We’re Still Waiting for the Next Big Leap in AI

When OpenAI announced GPT-4, its latest large language model, last March, it sent shockwaves through the tech world. It was clearly more capable than anything seen before at chatting, coding, and solving all sorts of thorny problems—including school homework.

Anthropic, a rival to OpenAI, announced today that it has made its own AI advance, one that will upgrade chatbots and other products built on its models. But although the new model is the world’s best by some measures, it’s more of a step forward than a big leap.

Anthropic’s new model, called Claude 3.5 Sonnet, is an upgrade to its existing Claude 3 family of AI models. It is more adept at solving math, coding, and logic problems as measured by commonly used benchmarks. Anthropic says it is also a lot faster, better understands nuances in language, and even has a better sense of humor.

That’s no doubt useful to people trying to build apps and services on top of Anthropic’s AI models. But the company’s news is also a reminder that the world is still waiting for another leap forward in AI akin to the one delivered by GPT-4.

Expectation has been building for OpenAI to release a sequel called GPT-5 for more than a year now, and the company’s CEO, Sam Altman, has encouraged speculation that it will deliver another revolution in AI capabilities. GPT-4 cost more than $100 million to train, and GPT-5 is widely expected to be much larger and more expensive.

Although OpenAI, Google, and other AI developers have released new models that outdo GPT-4, the world is still waiting for that next big leap. Progress in AI has lately become more incremental, relying more on innovations in model design and training than on the brute-force scaling of model size and computation that produced GPT-4.

Michael Gerstenhaber, head of product at Anthropic, says the company’s new Claude 3.5 Sonnet model is larger than its predecessor but draws much of its new competence from innovations in training. For example, the model was given feedback designed to improve its logical reasoning skills.

Anthropic says that Claude 3.5 Sonnet outscores the best models from OpenAI, Google, and Meta on popular AI benchmarks, including GPQA, a graduate-level test of expertise in biology, physics, and chemistry; MMLU, a test covering computer science, history, and other topics; and HumanEval, a measure of coding proficiency. The improvements, though, amount to only a few percentage points.

This latest progress in AI might not be revolutionary but it is fast-paced: Anthropic only announced its previous generation of models three months ago. “If you look at the rate of change in intelligence you’ll appreciate how fast we’re moving,” Gerstenhaber says.

More than a year after GPT-4 spurred a frenzy of new investment in AI, producing big new leaps in machine intelligence may be proving more difficult. With GPT-4 and similar models trained on huge swathes of online text, imagery, and video, it is getting more difficult to find new sources of data to feed to machine-learning algorithms. Making models substantially larger, so they have more capacity to learn, is expected to cost billions of dollars. When OpenAI announced its own recent upgrade last month, with a model that has voice and visual capabilities called GPT-4o, the focus was on a more natural and humanlike interface rather than on substantially more clever problem-solving abilities.

Adobe Says It Won’t Train AI Using Artists’ Work. Creatives Aren’t Convinced

When users first found out about Adobe’s new terms of service (which were quietly updated in February), there was an uproar. Adobe told users it could access their content “through both automated and manual methods” and use “techniques such as machine learning in order to improve [Adobe’s] Services and Software.” Many understood the update as the company forcing users to grant unlimited access to their work, for purposes of training Adobe’s generative AI, known as Firefly.

Late on Tuesday, Adobe issued a clarification: In an updated version of its terms of service agreement, it pledged not to train AI on its users’ content stored locally or in the cloud and gave users the option to opt out of content analytics.

The ambiguous language of the earlier terms update, arriving as Adobe is caught in the crossfire of intellectual property lawsuits, shed light on a climate of acute skepticism among artists, many of whom depend heavily on Adobe for their work. “They already broke our trust,” says Jon Lam, a senior storyboard artist at Riot Games, referring to how award-winning artist Brian Kesinger discovered AI-generated images in the style of his art being sold under his name on Adobe’s stock image site, without his consent. Earlier this month, the estate of the late photographer Ansel Adams publicly scolded Adobe for allegedly selling generative AI imitations of his work.

Scott Belsky, Adobe’s chief strategy officer, had tried to assuage concerns when artists started protesting, clarifying that machine learning refers to the company’s non-generative AI tools; Photoshop’s “Content Aware Fill” tool, which lets users seamlessly remove objects from an image, is one of many features powered by machine learning. But while Adobe insists that the updated terms do not give the company ownership of content and that it will never use user content to train Firefly, the misunderstanding triggered a bigger discussion about the company’s market monopoly and how a change like this could threaten the livelihoods of artists at any point. Lam is among the artists who still believe that, despite Adobe’s clarification, the company will use work created on its platform to train Firefly without the creators’ consent.

The nervousness over nonconsensual use and monetization of copyrighted work by generative AI models is not new. Early last year, artist Karla Ortiz was able to prompt images of her work using her name on various generative AI models, an offense that gave rise to a class action lawsuit against Midjourney, DeviantArt, and Stability AI. Ortiz was not alone—Polish fantasy artist Greg Rutkowski found that his name was one of the most commonly used prompts in Stable Diffusion when the tool first launched in 2022.

As the owner of Photoshop and creator of the PDF format, Adobe has reigned as the industry standard for over 30 years, its tools powering much of the creative class. Its attempt to acquire the product design company Figma was blocked and abandoned in 2023 over antitrust concerns, a testament to the company’s size.

Airbnb’s Olympics Push Could Help It Win Over Paris

Short-term rentals can function as a quick release valve for a city expecting an influx of visitors, increasing capacity for a short time nearly instantly. In fact, despite the usual hype around the Olympic Games, there are still many places to stay in Paris this summer.

A search on Airbnb for a two-person stay during the first weekend of the games returned more than 1,000 results, with many charging less than $200 a night. A search for hotel rooms on Expedia turned up only around 20 hotels offering similarly low rates. Hotel prices in Paris for the dates of the Olympics have actually fallen since December but remain higher than for the same period last summer, with the average hotel room during the opening weekend of the games going for around €440 as of May.

Booking rates for short-term rentals during the Olympics are up by 8 percent compared to the dates two weeks before the games across all locations hosting Olympic events, but the number of available rooms has increased by 38 percent, according to AirDNA, a third-party platform that tracks short-term rentals.

The average price in Paris for a short-term rental during the Olympics is $481 a night, while those who booked earlier paid an average of $350. Outside of Paris, rates average $289, up from a previous $198. The “vast majority” of these listings on Airbnb, says Stephenson, come from families listing their primary homes. But other Parisians are begging travelers to stay away, warning that the games will bring chaos, and some are planning to flee the city.

People from more than 160 countries and regions have booked stays on Airbnb for the Olympics, according to the company. The largest influx of tourists comes from the US, with American travelers making up 20 percent of the bookings, and many other guests coming from the UK, Germany, and the Netherlands.

Against that background, and with Airbnb’s marketing push, Jamie Lane, chief economist and senior vice president of research of AirDNA, says it makes sense that more people are signing up with Airbnb to host. “Everyone starts getting Olympic fever,” he says, especially “with Airbnb doing more and more ads and market outreach within the city of Paris.”

Despite the flood of visitors, the ready availability of vacancies suggests that, like many athletes competing in Paris, some Airbnb hosts will end the games disappointed, their listings unbooked. But Lane says that in the past, large events have provided a lasting boost to Airbnb’s footprint in a place. “A city is left with more listings than it had going in,” Lane says. For “people that maybe decide to do it for the first time, it ends up being a good experience. It was very little work. They think: ‘I should do this again.’”

OpenAI-Backed Nonprofits Have Gone Back on Their Transparency Pledges

Neither database mandates nor generally contains up-to-date versions of the records that UBI Charitable and OpenResearch had said they provided in the past.

The original YC Research conflict-of-interest policy that Das did share calls for company insiders to be upfront about transactions in which their impartiality could be questioned and for the board to decide how to proceed.

Das says the policy “may have been amended since OpenResearch’s policies changed (including when the name was changed from YC Research), but the core elements remain the same.”

No Website

UBI Charitable launched in 2020 with $10 million donated from OpenAI, as first reported by TechCrunch last year. UBI Charitable’s aim, according to its government filings, is to put the more than $31 million it had received by the end of 2022 toward initiatives that try to offset “the societal impacts” of new technologies and ensure no one is left behind. It has donated largely to CitySquare in Dallas and Heartland Alliance in Chicago, both of which work on a range of projects to fight poverty.

UBI Charitable doesn’t appear to have a website but shares a San Francisco address with OpenResearch and OpenAI, and OpenAI staff have been listed on UBI Charitable’s government paperwork. Its three Form 990 filings since launching all state that records including governing documents, financial statements, and a conflict-of-interest policy were available upon request.

Rick Cohen, chief operating and communications officer for National Council of Nonprofits, an advocacy group, says “available upon request” is a standard answer plugged in by accounting firms. OpenAI, OpenResearch, and UBI Charitable have always shared the same San Francisco accounting firm, Fontanello Duffield & Otake, which didn’t respond to a request for comment.

Miscommunication or poor oversight could lead to the standard answer about access to records getting submitted, “even if the organization wasn’t intending to make them available,” Cohen says.

The disclosure question ended up on what’s known as the Form 990 as part of an effort in 2008 to help the increasingly complex world of nonprofits showcase their adherence to governance best practices, at least as implied by the IRS, says Kevin Doyle, senior director of finance and accountability at Charity Navigator, which evaluates nonprofits to help guide donors’ giving decisions. “Having that sort of transparency story is a way to indicate to donors that their money is going to be used responsibly,” Doyle says.

OpenResearch solicits donations on its website, and UBI Charitable stated on its most recent IRS filing that it had received over $27 million in public support. Doyle says Charity Navigator’s data show donations tend to flow to organizations it rates higher, with transparency among the measured factors.

It’s certainly not unheard of for organizations to share a wide range of records. Charity Navigator has found that most of the roughly 900 largest US nonprofits reliant on individual donors publish financial statements on their websites. It doesn’t track disclosure of bylaws or conflict-of-interest policies.

Charity Navigator publishes its own audited financial statements and at least eight nonstandard policies it maintains, including ones on how long it retains documents, how it treats whistleblower complaints, and which gifts staff can accept. “Donors can look into what we’re doing and make their own judgment rather than us operating as a black box, saying, ‘Please give us money, but don’t ask any questions,’” Doyle says.

Cohen of the National Council of Nonprofits cautions that over-disclosure could create vulnerabilities. Posting a disaster-recovery plan, for example, could offer a roadmap to computer hackers. He adds that just because organizations have a policy on paper doesn’t mean they follow it. But knowing what they were supposed to do to evaluate a potential conflict of interest could still allow for more public accountability than otherwise possible, and if AI could be as consequential as Altman envisions, the scrutiny may very well be needed.

Orkut’s Founder Is Still Dreaming of a Social Media Utopia

Before Orkut launched in January 2004, Büyükkökten warned the team that the platform he’d built it on could handle only 200,000 users. It wouldn’t be able to scale. “They said, let’s just launch and see what happens,” he explains. The rest is online history. “It grew so fast. Before we knew it, we had millions of users,” he says.

Orkut featured a digital Scrapbook and the ability to give people compliments (ranging from “trustworthy” to “sexy”), create communities, and curate your very own Crush List. “It reflected all of my personality traits. You could flatter people by saying how cool they were, but you could never say something negative about them,” he says.

At first, Orkut was popular in the US and Japan. But, as predicted, server issues severed its connection to its users. “We started having a lot of scalability issues and infrastructure problems,” Büyükkökten says. They were forced to rewrite the entire platform using C++, Java, and Google’s tools. The process took an entire year, and scores of original users dropped off due to sluggish speeds and one too many encounters with Orkut’s now-nostalgic “Bad, bad server, no donut for you” error message.

Around this time, though, the site became incredibly popular in Finland. Büyükkökten was bemused. “I couldn’t figure it out until I spoke to a friend who speaks Finnish. And he said: ‘Do you know what your name means?’ I didn’t. He told me that orkut means multiple orgasms.” Come again? “Yes, so in Finland, everyone thought they were signing up to an adult site. But then they would leave straight after as we couldn’t satisfy them,” he laughs.

Awkward double meanings aside, Orkut continued to spread across the world. In addition to exploding in Estonia, the platform went mega in India. Its true second home, though, was Brazil. “It became a huge success. A lot of people think I’m Brazilian because of this,” Büyükkökten explains. He has a theory about why Brazil went nuts for Orkut. “Brazil’s culture is very welcoming and friendly. It’s all about friendships and they care about connections. They’re also very early adopters of technology,” he says. At its peak, 11 million of Brazil’s 14 million internet users were on Orkut, most logging on through cybercafes. It took Facebook seven years to catch up.

But Orkut wasn’t without its problems (and many fake profiles). The site was banned in Iran and the United Arab Emirates. Government authorities in Brazil and India had concerns about drug-related content and child pornography, something Büyükkökten denies existed on Orkut. Brazilians coined the word orkutização to describe a social media site like Orkut becoming less cool after going mainstream. In 2014, having hemorrhaged users due to slow server speeds, Facebook’s more intuitive interface, and issues surrounding privacy, Orkut went offline. “Vic Gundotra, in charge of Google+, decided against having any competing social products,” Büyükkökten explains.

But Büyükkökten has fond memories. “We had so many stories of people falling in love and moving in together from different parts of the world. I have a friend in Canada who met his wife in Brazil through Orkut, a friend in New York who met his wife in Estonia and now they’re married with two kids,” he says. It also provided a platform for minority communities. “I was talking to a gay journalist from a small town in São Paulo who told me that finding all these LGBTQ people on Orkut transformed his life,” he adds.

Büyükkökten left Google in 2014 and founded a new social network, again featuring a simple five-letter title: Hello. He wanted to focus on positive connection. It used “loves” rather than likes, and users could choose from more than 100 personae, ranging from Cricket Fan to Fashion Enthusiast, and then were connected to like-minded people with common interests. Soft-launched in Brazil in 2018 with 2 million users, Hello enjoyed “ultra-high engagement” that Büyükkökten claims surpassed the likes of Instagram and Twitter. “One of the things that stood out in our user surveys was that people said when they open Hello, it makes them happy.”

The app was downloaded more than 2 million times—a fraction of the users Orkut enjoyed—but Büyükkökten is proud of it. “It surpassed all our dreams. There were numerous instances where our K-Factor (the number of new people that existing users bring to an app) reached 3, leading us to exponential growth,” he says. But, in 2020, Büyükkökten bid goodbye to Hello.
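The K-factor Büyükkökten cites works like compound interest: each cohort of new users invites K more, so any sustained K above 1 compounds into exponential growth. A minimal sketch of that arithmetic, using hypothetical numbers rather than Hello’s actual figures:

```python
def users_after_cycles(seed_users: int, k_factor: float, cycles: int) -> int:
    """Project cumulative users when each new cohort invites k_factor more.

    Hypothetical illustration of viral growth only, not Hello's real data.
    """
    total = seed_users
    cohort = seed_users
    for _ in range(cycles):
        # Each cycle, the newest cohort brings in k_factor times its own size.
        cohort = int(cohort * k_factor)
        total += cohort
    return total

# With K = 3, a seed of 1,000 users passes 100,000 within four invite cycles.
print(users_after_cycles(1000, 3.0, 4))  # → 121000
```

At K of exactly 1 the same loop produces only linear growth, which is why a measured K-factor of 3 reads as exponential.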

Now he’s working on a new platform. “It’ll leverage AI and machine learning to optimize for improving happiness, bringing people together, fostering communities, empowering users, and creating a better society,” he says. “Connection will be the cornerstone of design, interaction, product, and experience.” And the name? “If I told you the new brand, you would have an aha moment and everything would be crystal clear,” he says.

Once again, it’s driven by his enduring desire to connect people. “One of the biggest ills of society is the decline in social capital. After smartphones and the pandemic, we have stopped hanging out with our friends and don’t know our neighbors. We have a loneliness epidemic,” he says.

He is fiercely critical of current platforms. “My biggest passion in life is connecting people through technology. But when was the last time you met someone on social media? It’s creating shame, pessimism, division, depression, and anxiety,” he says. For Büyükkökten, optimism is more important than optimization. “These companies have engineered the algorithm for revenue,” he says. “But it’s been awful for mental health. The world is terrifying right now and a lot of that has come through social media. There’s so much hate.”

Instead, he wants social media to be a place of love and a facilitator for meeting new people in person. But why will it work this time around? “That’s a really good question,” he says. “One thing that has been really consistent is that people miss Orkut right now.” It’s true—Brazilian social media has recently been abuzz with memes and memories to celebrate the site’s 20th birthday. “A teenage boy even recently drove 10 hours to meet me at a conference to talk about Orkut. And I was like, how is that even possible?” he laughs. Orkut’s landing page is still live, featuring an open letter calling for a social media utopia.

This, along with our collective desire for a more human social media, is what makes Büyükkökten believe that his next platform is one that will truly stick around. Has he decided on that all-important name? “We haven’t announced it yet. But I’m really excited. I truly care. I want to bring that authenticity and sense of belonging back,” he concludes. Perhaps, as his Finnish fans would joke, it’s time for Orkut’s second coming.

This story first appeared in the July/August 2024 UK edition of WIRED magazine.

Publishers Target Common Crawl In Fight Over AI Training Data

Danish media outlets have demanded that the nonprofit web archive Common Crawl remove copies of their articles from past data sets and stop crawling their websites immediately. This request was issued amid growing outrage over how artificial intelligence companies like OpenAI are using copyrighted materials.

Common Crawl plans to comply with the request, first issued on Monday. Executive director Rich Skrenta says the organization is “not equipped” to fight media companies and publishers in court.

The Danish Rights Alliance (DRA), an association representing copyright holders in Denmark, spearheaded the campaign. It made the request on behalf of four media outlets, including Berlingske Media and the daily newspaper Jyllands-Posten. The New York Times made a similar request of Common Crawl last year, prior to filing a lawsuit against OpenAI for using its work without permission. In its complaint, the New York Times highlighted how Common Crawl’s data was the most “highly weighted data set” in GPT-3.

Thomas Heldrup, the DRA’s head of content protection and enforcement, says that this new effort was inspired by the Times. “Common Crawl is unique in the sense that we’re seeing so many big AI companies using their data,” Heldrup says. He sees its corpus as a threat to media companies attempting to negotiate with AI titans.

Although Common Crawl has been essential to the development of many text-based generative AI tools, it was not designed with AI in mind. Founded in 2007, the San Francisco–based organization was best known prior to the AI boom for its value as a research tool. “Common Crawl is caught up in this conflict about copyright and generative AI,” says Stefan Baack, a data analyst at the Mozilla Foundation who recently published a report on Common Crawl’s role in AI training. “For many years it was a small niche project that almost nobody knew about.”

Prior to 2023, Common Crawl did not receive a single request to redact data. Now, in addition to the requests from the New York Times and this group of Danish publishers, it’s also fielding an uptick in requests that have not been made public.

In addition to this sharp rise in demands to redact data, Common Crawl’s web crawler, CCBot, is also increasingly thwarted from accumulating new data from publishers. According to the AI detection startup Originality AI, which often tracks the use of web crawlers, more than 44 percent of the top global news and media sites block CCBot. Apart from BuzzFeed, which began blocking it in 2018, most of the prominent outlets it analyzed—including Reuters, the Washington Post, and the CBC—spurned the crawler in only the last year. “They’re being blocked more and more,” Baack says.

Common Crawl’s quick compliance with this kind of request is driven by the realities of keeping a small nonprofit afloat. Compliance does not equate to ideological agreement, though. Skrenta sees this push to remove archival materials from data repositories like Common Crawl as nothing short of an affront to the internet as we know it. “It’s an existential threat,” he says. “They’ll kill the open web.”

Light-Based Chips Could Help Slake AI’s Ever-Growing Thirst for Energy

“What we have here is something incredibly simple,” said Tianwei Wu, the study’s lead author. “We can reprogram it, changing the laser patterns on the fly.” The researchers used the system to design a neural network that successfully discriminated vowel sounds. Most photonic systems need to be trained before they’re built, since training necessarily involves reconfiguring connections. But since this system is easily reconfigured, the researchers trained the model after it was installed on the semiconductor. They now plan to increase the size of the chip and encode more information in different colors of light, which should increase the amount of data it can handle.

It’s progress that even Psaltis, who built the facial recognition system in the ’90s, finds impressive. “Our wildest dreams of 40 years ago were very modest compared to what has actually transpired.”

First Rays of Light

While optical computing has advanced quickly over the past several years, it’s still far from displacing the electronic chips that run neural networks outside of labs. Papers announce photonic systems that work better than electronic ones, but they generally run small models using old network designs and small workloads. And many of the reported figures about photonic supremacy don’t tell the whole story, said Bhavin Shastri of Queen’s University in Ontario. “It’s very hard to do an apples-to-apples comparison with electronics,” he said. “For instance, when they use lasers, they don’t really talk about the energy to power the lasers.”

Lab systems need to be scaled up before they can show competitive advantages. “How big do you have to make it to get a win?” McMahon asked. The answer: exceptionally big. That’s why no one can match a chip made by Nvidia, whose chips power many of the most advanced AI systems today. There is a huge list of engineering puzzles to figure out along the way—issues that the electronics side has solved over decades. “Electronics is starting with a big advantage,” said McMahon.

Some researchers think ONN-based AI systems will first find success in specialized applications where they provide unique advantages. Shastri said one promising use is in counteracting interference between different wireless transmissions, such as 5G cellular towers and the radar altimeters that help planes navigate. Early this year, Shastri and several colleagues created an ONN that can sort out different transmissions and pick out a signal of interest in real time and with a processing delay of under 15 picoseconds (15 trillionths of a second)—less than one-thousandth of the time an electronic system would take, while using less than 1/70 of the power.

But McMahon said the grand vision—an optical neural network that can surpass electronic systems for general use—remains worth pursuing. Last year his group ran simulations showing that, within a decade, a sufficiently large optical system could make some AI models more than 1,000 times as efficient as future electronic systems. “Lots of companies are now trying hard to get a 1.5-times benefit. A thousand-times benefit, that would be amazing,” he said. “This is maybe a 10-year project—if it succeeds.”


Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

As Google Targets AI Search Ads, It Could Learn a Lot From Bing

Disclosure of ads has been an issue on Copilot as well. Though Microsoft says it labels all ads, Marcus Pratt, senior vice president for insights and technology at the ad-buying agency Mediasmith, says he’s encountered at least two searches in which sponsored links arguably weren’t adequately disclosed.

Last week, Pratt looked up the best reels to wind up and store his garden hose. Copilot recommended eight options, all apparently lifted from an article from the reviews publication Spruce, which links to Amazon product listings and gets a commission when readers make a purchase. When clicking on the reels in Copilot, he ended up on giraffetools.com, with code in the URL suggesting it had been a sponsored link. But an “Ad” label is only visible if a user hovers over the link for a moment before clicking. Spruce and Giraffe Tools didn’t respond to requests for comment.

In the other search, Copilot recommended a Nike Pegasus running shoe, but when hovering over the name, Microsoft showed a link to the shoe brand On with a small “Ad” label in the corner. A link to a Women’s Health article with more details about the Nike pair is below the ad. Pratt calls it a potentially dissatisfying experience for brands and a confusing one for consumers. “This blending of organic recommendations and sponsored listings is blurring the lines more than I have seen in the past,” he says. Nike, On, and Women’s Health didn’t respond to requests for comment.

Microsoft’s Sainsbury-Carter says ad experiences may vary as Microsoft continues testing and applying feedback.

Despite optimism among investors that the tech giants can smooth out the rough edges and keep sales flowing, mixing AI-generated content into search is the industry’s biggest shift since the advent of smartphones. Google is trying to quickly satisfy people’s curiosity by using AI Overviews’ generative AI to summarize the web, a feature users have panned for embarrassing gaffes like suggesting they squeeze glue on pizza.

Microsoft is not only publishing similar AI summaries, but also enabling users to explore topics by conversing with Copilot, the AI chatbot from Bing. Though Google has tested ads in a precursor to AI Overviews, Microsoft is so far ahead—displaying more ads and disclosing more about how they are performing.

In a webinar for select ad agencies last week seen by WIRED, Microsoft’s Murray said that users click on ads in Copilot at nearly twice the rate they do for equivalent ads shown in the first slot above traditional search results, historically the most clicked position. Users also prefer a Copilot experience with ads over one without, by a slim margin.

Sainsbury-Carter says that, to her, the data mean users are finding Copilot ads more integral than tacky. She adds that clicks on multimedia ads, specifically, were three times higher in Copilot than elsewhere in Bing between last July and this past January. The company declined to share specific figures but described the measure as statistically significant.

Opted-In to AI

Advertisers don’t have much choice about investing in AI search. Microsoft and Google are pulling from customers’ existing ad campaigns for other environments to fill the ad slots in Copilot and Overviews until more data is gathered on their effectiveness. That means Copilot can draw on advertisers’ content to show ads as simple text, a row of product images, sponsored links embedded within AI summarization, or multimedia widgets for booking travel or deciding which car to buy.

“We’re still in a place where we don’t feel like asking advertisers to adopt, launch, manage, and optimize an entirely new campaign type,” Microsoft’s Sainsbury-Carter says. “Certainly that could happen over time if it feels like it’s really bifurcating and the differences are great enough.”

I Spent a Week Eating Discarded Restaurant Food. But Was It Really Going to Waste?

It’s 10 pm on a Wednesday night and I’m standing in Blessed, a south London takeaway joint, half-listening to a fellow customer talking earnestly about Jesus. I’m nodding along, trying to pay attention as reggae reverberates around the small yellow shop front. But really, all I can think about is: What’s in the bag?

Today’s bag is blue plastic. A smiling man passes it over the counter. Only once I extricate myself from the religious lecture and get home do I discover what’s inside: Caribbean saltfish, white rice, vegetables, and a cup of thick, brown porridge.

All week, I’ve lived off mysterious packages like this one, handed over by cafés, takeaways, and restaurants across London. Inside is food once destined for the bin. Instead, I’ve rescued it using Too Good To Go, a Danish app that is surging in popularity, selling over 120 million meals last year and expanding fast in the US. For five days, I decided to divert my weekly food budget to eat exclusively through the app, paying between £3 and £6 (about $4 to $8) for meals that range from a handful of cakes to a giant box of groceries, in an attempt to understand what a tech company can teach me about food waste in my own city.

Users who open the TGTG app are presented with a list of establishments that either have food going spare right now or expect to in the near future. Provided is a brief description of the restaurant, a price, and a time slot. Users pay through the app, but this is not a delivery service. Surprise bags—customers have only a vague idea of what’s inside before they buy—have to be collected in person.

I start my experiment at 9:30 on a Monday morning, in the glistening lobby of the Novotel Hotel, steps away from the River Thames. Of all the breakfast options available the night before, this was the most convenient—en route to my office and offering a pickup slot that means I can make my 10 am meeting. When I say I’m here for TGTG, a suited receptionist nods and gestures toward the breakfast buffet. This branch of the Novotel is a £200-a-night hotel, yet staff do not seem begrudging of the £4.50 entry fee I paid in exchange for leftover breakfast. A homeless charity tells me its clients like the app for precisely that reason: cheap food, without the stigma. A server politely hands over my white-plastic surprise bag with two polystyrene boxes inside, as if I am any other guest.

I open the boxes in my office. One is filled with mini pastries, while the other is overflowing with Full English. Two fried eggs sit atop a mountain of scrambled eggs. Four sausages jostle for space with a crowd of mushrooms. I diligently start eating—a bite of cold fried egg, a mouthful of mushrooms, all four sausages. I finish with a croissant. This is enough to make me feel intensely full, verging on sick, so I donate the croissants to the office kitchen and tip the rest into the bin. This feels like a disappointing start. I am supposed to be rescuing waste food, not throwing it away.

Over the next two days, I live like a forager in my city, molding my days around pickups. I walk and cycle to cafés, restaurants, markets, supermarkets; to familiar haunts and places I’ve never noticed. Some surprise bags last for only one meal, others can be stretched out for days. On Tuesday morning, my £3.59 surprise bag includes a small cake and a slightly stale sourdough loaf, which provides breakfast for three more days. When I go back to the same café the following week, without using the app, the loaf alone costs £6.95.

TGTG was founded in Copenhagen in 2015 by a group of Danish entrepreneurs who were irked by how much food was wasted by all-you-can-eat buffets. Their idea to repurpose that waste quickly took off, and the app’s remit expanded to include restaurants and supermarkets. A year after the company was founded, Mette Lykke was sitting on a bus when a woman showed her the app and how it worked. She was so impressed, she reached out to the company to ask if she could help. Lykke has now been CEO for six years.

“I just hate wasting resources,” she says. “It was just this win-win-win concept.” To her, the restaurants win because they get paid for food they would have otherwise thrown away; the customer wins because they get a good deal while simultaneously discovering new places; and the environment wins because, she says, food waste contributes 10 percent of our global greenhouse gas emissions. When thrown-away food rots in a landfill, it releases methane into the atmosphere—with homes and restaurants the two largest contributors.

But the app doesn’t leave me with the impression I’m saving the planet. Instead, I feel more like I’m on a daily treasure hunt for discounted food. On Wednesday, TGTG leads me to a railway arch that functions as a depot for the grocery delivery app Gorillas. Before I’ve even uttered the words “Too Good To Go,” a teenager with an overgrown fringe emerges silently from the alleys of shelving units with this evening’s bag: groceries, many still days away from expiring, that suspiciously add up to create an entire meal for two people. For £5.50, I receive fresh pasta, pesto, cream, bacon, leeks, and a bag of stir-fry vegetables, which my husband merges into a single (delicious) pasta dish. It feels too convenient to be genuine waste. Perhaps Gorillas is attempting to convert me into its own customer? When I ask its parent company, Getir, how selling food that is still well within date helps combat food waste, the company does not reply to my email.

I am still thinking about my Gorillas experience at lunchtime on Thursday as I follow the app’s directions to the Wowshee falafel market stall, where 14 others are already queuing down the street. A few casual conversations later, I realize I am one of at least four TGTG users in the line. Seeing so many of us in one place again makes me wonder if restaurants are just using the app as a form of advertising. But Wowshee owner Ahmed El Shimi describes the marketing benefits as only a “little bonus.” For him, the app’s main draw is it helps cut down waste. “We get to sell the product that we were going to throw away anyway,” he says. “And it saves the environment at the same time.” El Shimi, who says he sells around 20 surprise bags per day, estimates using TGTG reduces the amount of food the stall wastes by around 60 percent. When I pay £5 for two portions of falafel—which lasts for lunch and dinner—the business receives £3.75 before tax, El Shimi says. “It’s not much, but it’s better than nothing.”

On Friday, my final day of the experiment, everything falls apart. I sleep badly and wake up late. The loaf from earlier in the week is rock solid. For breakfast, I eat several mini apple pies, part of a generous £3.09 Morrisons supermarket haul from the night before. Browsing the app, nothing appeals to me, and even if it did, I’m too tired to face leaving the house to collect it. After four days of eating nothing but waste food, I crack and seek solace in familiar ingredients buried in my cupboard: two fried eggs on my favorite brand of seeded brown bread.

TGTG is no solution for convenience. For me, the app is an answer to office lunch malaise. It pulled me out of my lazy routine while helping me eat well—in central London—on a £5 budget. In the queue for falafel, I met a fellow app user who told me how, before she discovered the app, she would eat the same sandwich from the same supermarket for lunch every day. For people without access to a kitchen, it offers a connection to an underworld of hot food going spare.

No Matter How You Package It, Apple Intelligence Is AI

While companies like Google, Microsoft, Amazon, and others had been upfront about their efforts in AI, for years Apple had been silent. Now, finally, its executives were talking, and I got an advance look. Eager to shed the impression that the most innovative of the tech giants was a laggard in this vital technology moment, its software leader Craig Federighi, services czar Eddy Cue, and top researchers argued that Apple had been a leader in AI for years but just didn’t make a big deal of it. Advanced machine learning was already deep in some of its products, and we could expect more, including advances in Siri. And since Apple valued data security more than competitors, its AI efforts would be distinguished by exacting privacy standards. How many people are working on AI at Apple? I asked. “A lot,” Federighi told me. Another executive emphasized that while AI could be transformative, Apple wanted nothing to do with the woo-woo aspects that excited some in the field, including the pursuit of superintelligence. “It’s a technique that will ultimately be a very Apple way of doing things,” said one executive.

That conversation took place eight years ago, when the technology du jour was deep learning AI. But a year after that, a groundbreaking advance called Transformers led to a new wave of smart software called generative AI, which powered OpenAI’s ChatGPT. In an instant, people started judging tech companies by how aggressively they jumped on the trend. OpenAI’s rivals were quick to act. Apple, not so much. Many of its best AI scientists had been working on self-driving cars or its expensive mixed-reality Vision Pro headset. In the last year or so, Apple pulled its talent from such projects—no more autonomous cars—and instead came up with its own gen-AI strategy. And at this week’s Worldwide Developers Conference, Apple revealed what it was up to.

Uncharacteristically for such an event, the news was less about products than Apple’s declaration that when it comes to gen AI, we’re on it. In an interview after the keynote, CEO Tim Cook explained the anomaly. “It became clear that people wanted to know our views of generative AI in particular,” he said. But just as in 2016, there was a cautionary note: While the company would now embrace generative AI, it would do it in a very Apple way. The company refused to even label its technology as artificial intelligence. Instead, it coined the phrase Apple Intelligence, a made-up technical name whose purpose seems to be to distance Apple from the scary aspects of this powerful tech wave. Apple isn’t interested in pursuing the singularity or making the movie Her come to life. It’s using this new tool to enhance productivity and creativity, and just as with past intimidating technologies, Apple-izing AI will make it go down easy.

The approach is well timed. I date the age of generative AI from the November 2022 release of ChatGPT. We spent all of 2023 trying to absorb what it meant, and a lot of people are now experiencing a rejection impulse. They’re repelled by AI’s hallucinations and angry at the prospect of lost jobs. And most people still haven’t figured out what AI can actually do for them. In 2024, smart companies have been concentrating on how this jaw-dropping technology can actually be put to use in prosaic scenarios. Apple proclaimed, “AI for the rest of us.” (The one time the letters “AI” were used in the keynote.) It was a conscious invocation of the original Macintosh slogan. Presumably, Apple will spread AI to the masses in the same way it promulgated the graphical user interface with the Mac.

In contrast to that great ambition, the products Apple touted during the keynote weren’t exactly revolutionary. A lot of the demos involved summarizing, transcribing, auto-completing emails, organizing inboxes, writing paragraphs from prompts, and zapping photo-bombers from images. Those are table stakes for the gen-AI era. Apple’s pitch, as always, is that it will offer these advances organically woven into your normal workflow so you’ll actually use those features and be delighted by them. Apple has also come up with some nice twists in these products. Its Photos app promises a deeper search capability, using AI to figure out what a picture shows and who’s in it to search for specific images from vague prompts. In automatically generated email replies, Apple could, in certain cases, ask you a simple question, answerable by a single click—do you actually want to meet this person and when?—and then spin off a response that reflects your intent. More significantly, because users in Apple’s ecosystem have a wealth of personal information on their phones and computers, Apple’s AI can use that data to deliver relevant output while keeping those details onboard the devices, protecting users’ privacy. Apple SVP Federighi—still on the case—describes it as “intelligence that understands you.” (Apple even claims it will use outside investigators to verify that the data is indeed secure.)

The most interesting of the Apple announcements involved its AI assistant, Siri, which has been looking like an antique in the age of generative AI. Apple promised that in the future—maybe 2025?—Siri would not only become a better conversationalist but also could be a uniquely powerful personal assistant by performing complex requests involving multiple apps. Ironically, this was the vision of the original Siri team in 2011, overruled by Steve Jobs in the pursuit of simplicity—and because the underlying technology just wasn’t ready. “This is the exact missing link from the original Siri,” says Dag Kittlaus, who was in charge of that team when Apple launched the product. Kittlaus and some key colleagues later attempted to fulfill the vision with a startup called Viv, which now lives on as a Samsung product called Bixby. In order for a complex system like this to work, it’s imperative to get a critical mass of developers to sign on. The WWDC program included sessions that instructed developers how to make their apps work with Siri.