Keeping pace with a fast-growing industry like AI is a tough task. So until an AI can do it for you, here’s a handy roundup of stories from the past week in the world of machine learning, as well as notable research and experiments we didn’t cover ourselves.
This week, movers and shakers in the AI industry, including OpenAI CEO Sam Altman, went on a goodwill tour with policymakers, making the case for their respective approaches to AI regulation. Speaking to reporters in London, Altman warned that the EU’s proposed AI Act, which is set to be finalized next year, could prompt OpenAI to eventually withdraw its services from the bloc.
“We will try to comply, but if we cannot comply, we will cease operations,” he said.
Google CEO Sundar Pichai, also in London, stressed the need for “appropriate” AI guardrails that don’t stifle innovation. And Microsoft’s Brad Smith proposed a five-point blueprint for public governance of AI in a meeting with lawmakers in Washington.
If there’s a common thread, it’s that tech titans expressed a willingness to be regulated, so long as it doesn’t interfere with their business ambitions. For example, Smith declined to address the unresolved legal question of whether training AI on copyrighted data (which Microsoft does) is permitted under the fair use doctrine in the US. Were stricter licensing requirements around AI training data imposed at the federal level, it could prove costly for Microsoft and its rivals.
Altman, for his part, appeared to take issue with provisions in the AI Act that require companies to publish summaries of the copyrighted data used to train their AI models, and that make them partially responsible for how the systems are deployed downstream. He also questioned requirements to reduce the energy consumption and resource use of AI training, a notoriously computation-intensive process.
The regulatory path overseas remains uncertain. But in America, the OpenAIs of the world may finally be getting their way. Last week, Altman wowed members of the Senate Judiciary Committee with carefully crafted statements about the dangers of AI and his recommendations for regulating it. Sen. John Kennedy (R-LA) was especially deferential: “This is your chance, folks, tell us how to get this right… talk in plain English and tell us what rules to enforce,” he said.
In comments to The Daily Beast, Suresh Venkatasubramanian, director of Brown University’s Center for Tech Responsibility, perhaps summed it up best: “We don’t let arsonists be in charge of the fire department.” And yet that’s what’s in danger of happening here with AI. It will be up to legislators to resist the sweet words of tech executives and press them where it’s needed. Only time will tell if they do.
Here are other AI headlines of note from the past few days:
- ChatGPT comes to more devices: Despite being US- and iOS-only ahead of its expansion to 11 more global markets, OpenAI’s ChatGPT app is off to a great start. App trackers say the app has already hit half a million downloads in its first six days, ranking it as one of the best-performing new apps both this year and last, topped only by the February 2022 arrival of the Trump-backed Twitter clone Truth Social.
- OpenAI proposes a regulatory body: AI is developing fast enough — and the dangers it could pose are clear enough — that OpenAI’s leadership believes the world needs an international regulatory body akin to the one governing nuclear power. The co-founders of OpenAI argued this week that the pace of innovation in AI is so rapid that we cannot expect existing authorities to adequately rein in the technology, so we need new ones.
- Generative AI comes to Google Search: Google announced this week that it’s starting to open up access to new generative AI capabilities in Search, after teasing them at its I/O event earlier in the month. With the update, Google says users can easily get up to speed on a new or complex topic, uncover quick suggestions for specific queries, or get in-depth information like customer ratings and prices on product searches.
- TikTok tests a bot: Chatbots are hugely popular, so it’s no surprise to learn that TikTok is experimenting with its own, too. The bot, called “Tako,” is undergoing limited testing in select markets, where it appears above a user’s profile on the right-hand side of the TikTok interface, alongside the buttons for likes, comments, and bookmarks. When tapped, users can ask Tako various questions about the video they’re watching or discover new content by asking for suggestions.
- Google on an AI pact: Google’s Sundar Pichai has agreed to work with lawmakers in Europe on what is being referred to as an “AI pact” — a stop-gap set of voluntary rules or standards while formal regulations on AI are developed. According to a memo, the bloc intends to launch the AI Pact “involving all major European and non-European AI actors on a voluntary basis” and ahead of the legal deadline of the aforementioned pan-European AI Act.
- People, but built with AI: With its AI DJ, Spotify trained an AI on the voice of a real person — that of its head of cultural partnerships and podcast host, Xavier “X” Jernigan. Now it looks like the streamer may turn that same technique to advertising. According to statements made by Bill Simmons, founder of The Ringer, the streaming service is developing AI technology that will be able to use a podcast host’s voice to create host-read ads — without the host actually having to read and record the ad copy.
- Product imagery through generative AI: At its Google Marketing Live event this week, Google announced it is launching Product Studio, a new tool that lets merchants easily create product imagery using generative AI. Brands will be able to create imagery within Merchant Center Next, Google’s platform for businesses to manage how their products appear in Google Search.
- Microsoft builds a chatbot into Windows: Microsoft is building its ChatGPT-based Bing experience into Windows 11 — and adding a few tweaks that allow users to ask the agent to help them navigate the OS. The new Windows Copilot is meant to make it easier for Windows users to find and tweak settings without having to delve deep into Windows submenus. The tool will also allow users to summarize content from the clipboard or compose text.
- Anthropic raises more cash: Anthropic, the leading generative AI startup co-founded by OpenAI veterans, has raised $450 million in a Series C funding round led by Spark Capital. Anthropic would not disclose how much the round valued its business, but a pitch deck obtained in March suggests it could be in the ballpark of $4 billion.
- Adobe brings generative AI to Photoshop: Photoshop got an infusion of generative AI this week, including several features that allow users to expand images beyond their borders with AI-generated backgrounds, add objects to images, or remove them with far greater accuracy than the previously available content-aware fill. For the time being, these features are only available in beta versions of Photoshop, but they already have some graphic designers worried about the future of their industry.
Other machine learning
Bill Gates may not be an expert in AI, but he is very rich, and he’s been right about things before. Turns out he’s bullish on personal AI agents, as he told Fortune: “Whoever wins the personal agent, that’s a big deal, because you’ll never go to a search site again, you’ll never go to a productivity site again, you’ll never go to Amazon again.” How exactly this will work hasn’t been explained, but his instinct that people won’t bother using a compromised search or productivity engine is probably not off base.
Risk assessment in AI models is an evolving science, which is to say we know very little about it. Google DeepMind (the newly formed entity encompassing Google Brain and DeepMind) and partners around the world are trying to move the ball forward, and have proposed a framework for evaluating “high risk” models with strong skills in manipulation, deception, cyber-offense, or other dangerous capabilities. Well, it’s a start.
Particle physicists are finding interesting ways to apply machine learning to their work: “We’ve shown that we can predict very complex high-dimensional beam shapes from a surprisingly small amount of data,” says SLAC’s Aurélie Edelen. The team built a model that helps them predict the shape of the particle beam in the accelerator, something that would normally take thousands of data points and a lot of computation time. This is much more efficient and could help make accelerators everywhere easier to use. Next up: “demonstrate the algorithm experimentally on reconstructing the full 6D phase space distribution.” OK!
Adobe Research and MIT collaborated on an intriguing computer vision problem: telling which pixels in an image represent which material. Since an object can comprise multiple materials as well as colors and other visual aspects, this is a very subtle distinction, but also an intuitive one. They had to create a new synthetic dataset to do it, and at first that didn’t work. So they fine-tuned an existing CV model on that data, and it got it right. Why is it useful? Hard to say, but it’s cool.
Large language models are usually trained primarily on English text for a number of reasons, but obviously the sooner they work as well in Spanish, Japanese, and Hindi, the better. BloomChat is a new model built on top of Bloom that currently works with 46 languages and is competitive with GPT-4 and others. It’s still quite experimental, so you wouldn’t want to go to production with it, but it could be great for testing an AI-adjacent product in multiple languages.
NASA has announced a new crop of SBIR II funding, and it has some interesting AI bits and pieces:
Geolabe is detecting and predicting groundwater variation using AI trained on satellite data, and hopes to implement the model in a new NASA satellite constellation later this year.
Zeus AI is working on algorithmically creating “3D atmospheric profiles” based on satellite imagery — essentially a 3D version of the 2D maps we already have of temperature, humidity, and the like.
Computing power in space is highly limited, and while some inference can be done there, training is out of the question. But IEEE researchers want to build a SWaP-efficient neuromorphic processor capable of training AI models in situ.
Robots acting autonomously in high-stakes situations typically require a human observer, and Piknic is looking at ways for such bots to communicate their intentions visually, such as how they’re going to reach for a door handle, so that the observer doesn’t have to intervene as much. Probably a good idea.