Keeping up with a fast-moving industry like AI is difficult. So, until AI can do it for you, here’s a helpful summary of recent stories in the world of machine learning, along with notable research and experiments that we didn’t cover on our own.
This week in AI, Microsoft unveiled a new standard PC keyboard layout with a “Copilot” key. You heard right — from now on, Windows devices will have a dedicated key to launch Microsoft’s AI-powered Copilot assistant, replacing the right Control key.
One imagines the move is meant to signal the seriousness of Microsoft’s investment in the race for AI dominance among consumers (and enterprises, for that matter). It’s the first time Microsoft has changed the Windows keyboard layout in nearly 30 years; laptops and keyboards with the Copilot key are scheduled to ship starting in late February.
But is it all hype? Do Windows users actually want AI, or at least Microsoft’s flavor of it?
To be sure, Microsoft has made a bid to inject nearly all of its products, old and new, with Copilot functionality. Through flashy keynotes, slick demos and now a dedicated key, the company is putting its AI technology front and center, and betting that the visibility will drive demand.
Demand isn’t a certainty. But, to be fair, a few vendors have managed to turn viral AI success into real revenue. Look at OpenAI, the maker of ChatGPT, which said its annualized revenue topped $1.6 billion at the end of 2023. Generative art platform Midjourney also appears to be profitable, and it has yet to raise a dime of outside capital.
Those successes are the exception, though. Most vendors, burdened by the costs of training and running sophisticated AI models, have had to seek larger and larger tranches of capital to stay afloat. Anthropic, for example, is reportedly raising $750 million in a round that would bring its total raised to more than $8 billion.
Microsoft, along with its chip partners AMD and Intel, hopes that AI processing will increasingly move from expensive data centers to on-device silicon, commoditizing AI in the process; that may well happen. Intel’s latest consumer chips include cores specifically designed to accelerate AI workloads, and new data center chips, including Microsoft’s own, could make model training a less expensive endeavor than it currently is.
But there is no guarantee. The real test will be to see whether Windows users and enterprise customers, bombarded by what amounts to Copilot ads, show an appetite for the technology — and spend on it. If they don’t, it may not be long before Microsoft has to redesign the Windows keyboard again.
Here are some other noteworthy AI stories from the past few days:
- Copilot comes to mobile: In more Copilot news, Microsoft has quietly launched Copilot apps for Android and iOS, along with iPadOS.
- GPT Store: OpenAI has announced plans to launch a store for GPTs, custom apps built on its text-generating AI models (such as GPT-4), sometime in the coming week. The GPT Store was announced last year during OpenAI’s first annual developer conference, DevDay, but was delayed in December, almost certainly because of the leadership shakeup that occurred in November, right after the initial announcement.
- OpenAI reduces regulatory risk: In other OpenAI news, the startup is looking to shrink its regulatory exposure in the EU by routing a significant portion of its international business through an Irish entity. The move would reduce the ability of some of the bloc’s privacy watchdogs to act unilaterally on concerns, Natasha writes.
- Training robots: Brian writes that Google’s DeepMind Robotics team is exploring ways to give robots a better understanding of exactly what we humans want from them. The team’s new system can manage a fleet of robots working side by side and suggest tasks those robots can carry out.
- New from Intel: Intel has spun out a new platform company, Articul8 AI, backed by Boca Raton, Florida-based investor and asset manager DigitalBridge. As an Intel spokesperson explains, the Articul8 platform “provides AI capabilities that keep customer data, training, and reasoning within the enterprise security perimeter,” an attractive prospect for customers in highly regulated industries like healthcare and financial services.
- Dark fishing exposed: Satellite imagery and machine learning are providing a new, far more detailed look at the maritime industry, specifically the number and activities of fishing and transport vessels at sea. It turns out there is much more of it going on than publicly available data suggests, a fact revealed by new research published in Nature by a team from Global Fishing Watch and several collaborating universities.
- AI-powered search: Perplexity AI, a platform that applies artificial intelligence to web search, has raised $73.6 million in a funding round that values the company at $520 million. Unlike traditional search engines, Perplexity offers a chatbot-like interface that allows users to ask questions in natural language (e.g., “Do we burn calories while sleeping?”, “What is the least visited country?”, etc.).
- Clinical notes, written automatically: And in more funding news, Paris-based startup Nabla raised a cool $24 million. The company, which has a partnership with The Permanente Medical Group, a division of U.S. healthcare giant Kaiser Permanente, is working on an “AI-powered copilot” for doctors and other clinical staff that automatically takes notes and writes medical reports.
More machine learning
You may remember various examples over the past year of interesting work involving subtle changes to images that cause machine learning models to misidentify them, for instance, seeing a dog as a car. These attacks work by adding “perturbations,” subtle changes to the image’s pixels, in a pattern that only the model can see. Or at least, it was believed that only the model could perceive them.
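For the curious, the perturbation trick can be sketched in a few lines. This is a toy, FGSM-style example against a hypothetical linear classifier, purely for illustration; the research in question attacks real vision models, and every name and value below is made up.

```python
import numpy as np

# Toy adversarial perturbation (fast gradient sign method, FGSM).
# The "model" is a hypothetical logistic classifier, not a real vision model.
rng = np.random.default_rng(0)

x = rng.random(64)           # "image": 64 pixel values in [0, 1]
w = rng.standard_normal(64)  # stand-in for trained model weights

def predict(img):
    """Probability the toy model assigns to class 'cat'."""
    return 1.0 / (1.0 + np.exp(-(w @ img)))

# For this linear model, the gradient of the 'cat' logit w.r.t. the pixels
# is simply w. FGSM nudges each pixel slightly in the sign of the gradient.
epsilon = 0.05
x_adv = np.clip(x + epsilon * np.sign(w), 0.0, 1.0)

# No pixel moved by more than epsilon, so the image looks nearly identical,
# yet the model's 'cat' score shifts.
print(f"before: {predict(x):.3f}  after: {predict(x_adv):.3f}")
```

The point of the DeepMind result is that patterns like this, supposedly invisible, may nudge human perception too.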
An experiment by researchers at Google DeepMind showed that when an image of flowers was perturbed to look more cat-like to the AI, people were also more likely to describe the image as cat-like, even though it clearly didn’t look like a cat. The same held for other common objects like trucks and chairs.
Image credits: Google DeepMind
Why? How? The researchers don’t really know, and the participants all felt like they were choosing at random (in reality, the effect, while reliable, was only slightly above chance). We’re apparently more perceptive than we give ourselves credit for, but the finding also has implications for safety and other areas, because it suggests that subliminal signals could spread through images without anyone noticing.
Another interesting experiment touching on human cognition came out of MIT this week, which used machine learning to help characterize what makes a sentence hard to understand. Essentially, simple sentences like “I walked to the beach” take hardly any brain power to decode, while complex or confusing sentences like “In his aristocratic system he made a dismal revolution” produce more and broader activation, as measured by fMRI.
The team compared activation readings from humans reading a variety of these sentences with how the same sentences activated the rough equivalent of those cortical regions in a large language model. They then built a second model that learned how the two activation patterns corresponded to each other. That model could predict whether new sentences would be taxing for human readers. It may sound a bit convoluted, but it’s definitely very interesting, trust me.
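The correspondence step can be illustrated with a tiny synthetic example. Everything here (the dimensions, the hidden linear relation, the choice of ridge regression) is an assumption for illustration, not the MIT team’s actual method or data.

```python
import numpy as np

# Sketch: learn a linear map from "LLM activations" to "brain activations",
# then score how well it predicts activations for held-out sentences.
# All data here is synthetic; real work uses fMRI recordings.
rng = np.random.default_rng(1)

n_train, n_test = 200, 50
d_model, d_brain = 32, 16  # hypothetical feature dimensions

X_train = rng.standard_normal((n_train, d_model))   # model activations
X_test = rng.standard_normal((n_test, d_model))
W_true = rng.standard_normal((d_model, d_brain))    # hidden relation
Y_train = X_train @ W_true + 0.1 * rng.standard_normal((n_train, d_brain))
Y_test = X_test @ W_true + 0.1 * rng.standard_normal((n_test, d_brain))

# Ridge regression in closed form: W = (X'X + lam*I)^-1 X'Y
lam = 1.0
W = np.linalg.solve(X_train.T @ X_train + lam * np.eye(d_model),
                    X_train.T @ Y_train)

pred = X_test @ W
# Correlation between predicted and observed held-out activations.
r = np.corrcoef(pred.ravel(), Y_test.ravel())[0, 1]
print(f"held-out correlation: {r:.2f}")
```

A high held-out correlation is what lets the second model say something about sentences it has never seen.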
Whether machine learning can mimic human cognition in more complex domains, such as interacting with computer interfaces, is still largely an open question. But there’s plenty of research out there, and it’s always worth a look. This week we have SeeAct, a system from researchers at Ohio State that works by grounding an LLM’s interpretations of possible actions in real-world examples.
![](https://techcrunch.com/wp-content/uploads/2024/01/action_grounding.png)
Image credits: Ohio State University
Basically, you can tell a system like GPT-4V to make a reservation on a site, and it will understand that it needs to click the “Make Booking” button, but it doesn’t actually know how to do that. By improving how it perceives interfaces, with clear taxonomies and grounded examples, it can do a lot better, even if it only succeeds a small fraction of the time. These agents have a long way to go, but expect plenty of big claims this year anyway; I heard a few just today.
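To make the “grounding” idea concrete, here is a deliberately crude sketch: mapping an LLM’s described action onto an actual element from a page’s accessibility tree. The element list and the word-overlap scoring are invented for illustration; real systems like this use far richer signals (screenshots, element attributes, learned rankers).

```python
# Hypothetical grounding step: given an action described in natural language,
# pick the concrete UI element it refers to. Elements are made up.
elements = [
    {"id": "nav-home", "role": "link", "text": "Home"},
    {"id": "btn-book", "role": "button", "text": "Make Booking"},
    {"id": "btn-cancel", "role": "button", "text": "Cancel"},
]

def ground(action_description, elements):
    """Pick the element whose visible text best overlaps the description."""
    words = set(action_description.lower().split())

    def score(el):
        # Count shared words between the description and the element text.
        return len(words & set(el["text"].lower().split()))

    return max(elements, key=score)

target = ground("click the make booking button", elements)
print(target["id"])  # -> btn-book
```

The hard part, and the reason these agents still fail often, is that real pages rarely label their elements this cleanly.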
Next, check out this interesting solution to a problem I had no idea existed, but which makes perfect sense. Self-driving ships are a promising area of automation, but when the sea is rough, it’s difficult to keep them on track. GPS and gyroscopes don’t cut it, visibility can be poor, and, more importantly, the systems steering these vessels aren’t especially sophisticated. They can veer wildly off course or waste fuel on big detours if they don’t know any better, which is a serious problem if you’re running on battery power. Not that I’d ever thought about it before!
Researchers at Korea Maritime & Ocean University (another thing I learned about today) propose a more robust wayfinding model based on simulating ship motion in a computational fluid dynamics model. They suggest this better understanding of wave motion and its impact on structures and propulsion could significantly improve the efficiency and safety of autonomous maritime transport. It might even be useful on human-piloted ships whose captains aren’t entirely sure of the best angle of attack for a given wave or storm pattern!
Finally, if you want a good summary of the major advances in computer science last year, which in 2023 overlapped heavily with machine learning research, check out Quanta’s excellent year-in-review.