Banks Are Finally Realizing What Climate Change Will Do to Housing

Clean energy firms are reaping the rewards of this emerging shift. Aira, a Swedish firm that carries out heat pump installations, recently announced that it had struck a deal valued at €200 million ($214 million) for loan commitments from the bank BNP Paribas. This will allow Aira customers in Germany to pay for their heat pumps in installments.

“Banks and financial institutions have a huge responsibility to accelerate the energy transition,” says Eirik Winter, BNP Paribas’ CEO in the Nordic region. That the financing arrangement could also boost property values is a “positive side effect,” he adds.

Home renovations and energy retrofits are not cheap. Loans are often necessary to lower the barrier to entry sufficiently for consumers. Lisa Cooke works for MCS, a body that accredits heat pumps and installers in the UK. She was able to afford a heat pump herself, she says, thanks only to a government grant and just under £5,000 ($6,300) of financing from Aira. “That’s really what has made it achievable for me,” she says. “Even with savings, I wouldn’t have been able to do it otherwise.”

Luca Bertalot, secretary general of the European Mortgage Federation—European Covered Bond Council, says there are huge risks to economic productivity if people can’t secure homes that protect them from the worst effects of climate change. In heat waves, he notes, worker productivity falls, meaning a negative impact on GDP. Conversely, he speaks of a kind of energy retrofit butterfly effect. If people make their home cheaper to cool or heat, perhaps they will save money, which they may spend on other things—their children’s education, say, which in turn improves their children’s chances of a comfortable life (and maybe of buying a climate-safe home themselves) in the future.

But there is still, perhaps, a sluggishness in recognizing the storm that is coming. Energy efficiency does little to protect properties from the sharper effects of climate change—stronger storms, rising seas, wildfires, and floods. As governments become unable to cover the costs of these disasters, lenders and insurers will likely end up exposed to the risks. The US National Flood Insurance Program, for instance, is already creaking under the weight of rising debt.

“As the damages pile up, it could well be that the markets will become more efficient and the incentives [to harden properties] become stronger—because nobody’s bailing you out anymore,” says Ralf Toumi at Imperial College London, who consults for insurance firms.

Ultimately, climate change impacts on housing will force some to move elsewhere, suggests Burt. Given the irrevocability of some scenarios, such as coastal villages that could be lost to the sea, or communities that become doomed to endless drought, there are some assets that no amount of hardening or retrofit will ever save. The structural utility of these properties will, like water in a drying oasis, simply evaporate.

To lessen the burden on people who are most at risk of losing their home to climate change, affordable loans might one day be targeted at consumers in these areas to help them move to safer places, says Burt. Lenders who don’t take this approach, and who continue offering mortgages on homes destined to succumb to climate change, may soon rue the day. “If you’re trying to support those markets,” Burt says, “you’re throwing good money after bad.”

STEM Students Refuse to Work at Google and Amazon Over Project Nimbus

More than 1,100 self-identified STEM students and young workers from more than 120 universities have signed a pledge to not take jobs or internships at Google or Amazon until the companies end their involvement in Project Nimbus, a $1.2 billion contract providing cloud computing services and infrastructure to the Israeli government.

The pledgers included undergraduate and graduate students from Stanford, UC Berkeley, the University of San Francisco, and San Francisco State University. Some students from those schools also participated in an anti–Project Nimbus rally on Wednesday outside Google’s San Francisco office with tech workers and activists.

Amazon and Google are among the top employers of graduates from leading STEM schools, according to data from the career service College Transitions, which was compiled using publicly available information from LinkedIn. According to the data, as of 2024, 485 UC Berkeley graduates and 216 Stanford graduates work at Google.

The pledge, which marks the latest backlash against Google and Amazon, was organized by No Tech for Apartheid (NOTA), a coalition of tech workers and activists from Muslim grassroots movement MPower Change and advocacy group Jewish Voice for Peace. Since 2021, NOTA has advocated for Google and Amazon to boycott and divest from Project Nimbus and any other work for the Israeli government.

“Palestinians are already harmed by Israeli surveillance and violence,” the pledge reads. “By expanding public cloud computing capacity and providing their state of the art technology to the Israeli occupation’s government and military, Amazon and Google are helping to make Israeli apartheid more efficient, more violent, and even deadlier for Palestinians.”

Sam, who asked to be identified only by his first name for fear of professional repercussions, says he signed the letter as a 2023 graduate of Cornell University’s master’s program in computer science and a recent entrant to the tech workforce.

He tells WIRED that he was moved to act after watching friends from graduate school who “think one way privately,” but then “went on to take careers in these Big Tech firms.”

“I know a lot of people who—not to say they have a price, but when somebody looks at a starting salary, it’ll test your principles a little bit,” Sam said.

Naomi Hardy-Njie, a communications major and computer science minor at the University of San Francisco, said she heard about the letter while participating in the school’s three-week encampment demanding disclosure and divestment from companies funding the war in Gaza.

Hardy-Njie said that she signed the letter because Google and Amazon executives have been reluctant to address protesters’ demands. But change, she said, “has to start from the bottom up.”

NOTA has organized several actions targeting Project Nimbus over the past several months. Eddie Hatfield, a NOTA organizer, was fired from Google in March after he interrupted the Google Israel managing director at a Google-sponsored tech conference in New York. More than 50 Google workers were later fired following a sit-in protest against Project Nimbus in Google’s New York and Sunnyvale offices, which was also organized by NOTA.

Google has claimed that Project Nimbus is “not directed” at classified or military work, but various document leaks have tied the contract to work for Israel’s military. Google and Amazon did not immediately respond to WIRED’s request for comment.

Before Smartphones, an Army of Real People Helped You Find Stuff on Google

The Eiffel Tower is 330 meters tall, and the nearest pizza parlor is 1.3 miles from my house. These facts were astoundingly easy to ascertain. All I had to do was type some words into Google, and I didn’t even have to spell them right.

For the vast majority of human history, this is not how people found stuff out. They went to the library, asked a priest, or wandered the streets following the scent of pepperoni. But then, for a brief period when search engines existed but it was too expensive to use them on your shiny new phone, people could call or text a stranger and ask them anything.

The internet first became available on cell phones in 1996, but before affordable data plans, accidentally clicking the browser icon on your flip phone would make you sweat. In the early 2000s, accessing a single website could cost you as much as a cheeseburger, so not many people bothered to Google on the go.

Instead, a variety of services sprang up offering mobile search without the internet. Between 2007 and 2010, Americans could call GOOG-411 to find local businesses, and between 2006 and 2016, you could text 242-242 to get any question answered by the company ChaCha. Brits could call 118 118 or text AQA on 63336 for similar services. Behind the scenes, there were no artificially intelligent robots answering these questions. Instead, thousands of people were once employed to be Google.

“Some guy phoned up and asked if Guinness was made in Ireland, people asked for the circumference of the world,” says Hayley Banfield, a 42-year-old from Wales who answered 118 118 calls from 2004 to 2005. The number was first launched in 2002 as a directory enquiries service—meaning people could call up to find out phone numbers and addresses (back then calls cost an average of 55 pence). In 2008, the business started offering to answer any questions. Although Banfield worked for 118 118 before this change, customers would ask her anything and everything regardless. “We had random things like ‘How many yellow cars are on the road?’”

While directory enquiry lines still exist, Banfield worked during their boom—she answered hundreds of calls in her 5:30 pm to 2 am shifts—and quickly noticed patterns in people’s queries. “Anything past 11 pm, that’s when the drunk calls would come in,” she says. People wanted taxis and kebab shops but were so inebriated that they’d forget to finish their sentences. Sometimes, callers found Banfield so helpful that they invited her to join them on their nights out. As the evening crept on, callers asked for massage parlors or saunas—then they would call back irate after Banfield recommended an establishment that didn’t meet their needs.

The “pizza hours” were 8 pm to 10 pm—everyone wanted the number for their local takeout. Banfield had a computer in front of her in the Cardiff call center, loaded with a simple database. She’d type in a postcode (she had memorized all of the UK’s as part of her training) and then use a shortcut such as “PIZ” for pizza or “TAX” for taxi. People sometimes accused Banfield of being psychic, but if the power had gone out in a certain area, she automatically knew that most callers wanted to know why.

My Memories Are Just Meta’s Training Data Now

In R. C. Sherriff’s novel The Hopkins Manuscript, readers are transported to a world 800 years after a cataclysmic event ended Western civilization. In pursuit of clues about a blank spot in their planet’s history, scientists belonging to a new world order discover diary entries in a swamp-infested wasteland formerly known as England. For the inhabitants of this new empire, it is only through this record of a retired school teacher’s humdrum rural life, his petty vanities and attempts to breed prize-winning chickens, that they begin to learn about 20th-century Britain.

If I were to teach futuristic beings about life on Earth, I once believed I could produce a time capsule more profound than Sherriff’s small-minded protagonist, Edgar Hopkins. But scrolling through my decade-old Facebook posts this week, I was presented with the possibility that my legacy may be even more drab.

Earlier this month, Meta announced that my teenage status updates were exactly the kind of content it wants to pass on to future generations of artificial intelligence. From June 26, old public posts, holiday photos, and even the names of millions of Facebook and Instagram users around the world would effectively be treated as a time capsule of humanity and transformed into training data.

That means my mundane posts about university essay deadlines (“3 energy drinks down 1,000 words to go”) as well as unremarkable holiday snaps (one captures me slumped over my phone on a stationary ferry) are about to become part of that corpus. The fact that these memories are so dull, and also very personal, makes Meta’s interest more unsettling.

The company says it is only interested in content that is already public: private messages, posts shared exclusively with friends, and Instagram Stories are out of bounds. Despite that, AI is suddenly feasting on personal artifacts that have, for years, been gathering dust in unvisited corners of the internet. For those reading from outside Europe, the deed is already done. The deadline announced by Meta applied only to Europeans. The posts of American Facebook and Instagram users have been training Meta AI models since 2023, according to company spokesperson Matthew Pollard.

Meta is not the only company turning my online history into AI fodder. WIRED’s Reece Rogers recently discovered that Google’s AI search feature was copying his journalism. But finding out which personal remnants exactly are feeding future chatbots was not easy. Some sites I’ve contributed to over the years are hard to trace. Early social network Myspace was acquired by Time Inc. in 2016, which in turn was acquired by a company called Meredith Corporation two years later. When I asked Meredith about my old account, they replied that Myspace had since been spun off to an advertising firm, Viant Technology. An email to a company contact listed on its website was returned with a message that the address “couldn’t be found.”

Asking companies still in business about my old accounts was more straightforward. Blogging platform Tumblr, owned by WordPress owner Automattic, said unless I’d opted out, the public posts I made as a teenager will be shared with “a small network of content and research partners, including those that train AI models” per a February announcement. YahooMail, which I used for years, told me that a sample of old emails—which have apparently been “anonymized” and “aggregated”—are being “utilized” by an AI model internally to do things like summarize messages. Microsoft-owned LinkedIn also said my public posts were being used to train AI although some “personal” details included in those posts were excluded, according to a company spokesperson, who did not specify what those personal details were.

Europe Scrambles for Relevance in the Age of AI

That concentration of power is uncomfortable for European governments. It makes European companies downstream customers of the future, importing the latest services and technology in exchange for money and data sent westward across the Atlantic. And these concerns have taken on a new urgency—partly because some in Brussels perceive a growing gap in values and beliefs between Silicon Valley and the median EU citizen and their elected representatives; and partly because AI looms large in the collective imagination as the engine of the next technological revolution.

European fears of lagging in AI predate ChatGPT. In 2018, the European Commission issued an AI plan calling for “AI made in Europe” that could compete with the US and China. But beyond a desire for some kind of control over the shape of technology, the operational definition of AI sovereignty has become pretty fuzzy. “For some people, it means we need to get our act together to fight back against Big Tech,” Daniel Mügge, professor of political arithmetic at the University of Amsterdam, who studies technology policy in the EU, says. “To others, it means there’s nothing wrong with Big Tech, as long as it’s European, so let’s get cracking and make it happen.”

Those competing priorities have begun to complicate EU regulation. The bloc’s AI Act, which passed the European Parliament in March and is likely to become law this summer, has a heavy focus on regulating potential harms and privacy concerns around the technology. However, some member states, notably France, made clear during negotiations over the law that they fear regulation could shackle their emerging AI companies, which they hope will become European alternatives to OpenAI.

Speaking before last November’s UK summit on AI safety, French finance minister Bruno Le Maire said that Europe needed to “innovate before it regulates” and that the continent needed “European actors mastering AI.” The AI Act’s final text includes a commitment to making the EU “a leader in the uptake of trustworthy AI.”

“The Italians and the Germans and the French at the last minute thought: ‘Well, we need to cut European companies some slack on foundation models,’” Mügge says. “That is wrapped up in this idea that Europe needs European AI. Since then, I feel that people have realized that this is a little bit more difficult than they would like.”

Sarlin, who has been on a tour of European capitals recently, including meeting with policymakers in Brussels, says that Europe does have some of the elements it needs to compete. To be a player in AI, you have to have data, computing power, talent, and capital, he says.

Data is fairly widely available, Sarlin adds, and Europe has AI talent, although it sometimes struggles to retain it.

To marshal more computing power, the EU is investing in high-performance computing resources, building a pan-European network of high-performance computing facilities, and offering startups access to supercomputers via its “AI Factories” initiative.

Accessing the capital needed to build big AI projects and companies is also challenging, with a wide gulf between the US and everyone else. According to Stanford University’s AI Index report, private investment in US AI companies topped $67 billion in 2023, more than 35 times the amount invested in Germany or France. Research from Accel Partners shows that in 2023, the seven largest private investment rounds by US generative AI companies totaled $14 billion. The top seven in Europe totaled less than $1 billion.

We’re Still Waiting for the Next Big Leap in AI

When OpenAI announced GPT-4, its latest large language model, last March, it sent shockwaves through the tech world. It was clearly more capable than anything seen before at chatting, coding, and solving all sorts of thorny problems—including school homework.

Anthropic, a rival to OpenAI, announced today that it has made its own AI advance, one that will upgrade chatbots and other applications. But although the new model is the world’s best by some measures, it’s more of a step forward than a big leap.

Anthropic’s new model, called Claude 3.5 Sonnet, is an upgrade to its existing Claude 3 family of AI models. It is more adept at solving math, coding, and logic problems as measured by commonly used benchmarks. Anthropic says it is also a lot faster, better understands nuances in language, and even has a better sense of humor.

That’s no doubt useful to people trying to build apps and services on top of Anthropic’s AI models. But the company’s news is also a reminder that the world is still waiting for another leap in AI akin to that delivered by GPT-4.

Expectation has been building for OpenAI to release a sequel called GPT-5 for more than a year now, and the company’s CEO, Sam Altman, has encouraged speculation that it will deliver another revolution in AI capabilities. GPT-4 cost more than $100 million to train, and GPT-5 is widely expected to be much larger and more expensive.

Although OpenAI, Google, and other AI developers have released new models that outdo GPT-4, the world is still waiting for that next big leap. Progress in AI has lately become more incremental, relying on innovations in model design and training rather than the brute-force scaling of model size and computation that produced GPT-4.

Michael Gerstenhaber, head of product at Anthropic, says the company’s new Claude 3.5 Sonnet model is larger than its predecessor but draws much of its new competence from innovations in training. For example, the model was given feedback designed to improve its logical reasoning skills.

Anthropic says that Claude 3.5 Sonnet outscores the best models from OpenAI, Google, and Meta in popular AI benchmarks, including GPQA, a graduate-level test of expertise in biology, physics, and chemistry; MMLU, a test covering computer science, history, and other topics; and HumanEval, a measure of coding proficiency. The improvements are a matter of a few percentage points, though.

This latest progress in AI might not be revolutionary but it is fast-paced: Anthropic only announced its previous generation of models three months ago. “If you look at the rate of change in intelligence you’ll appreciate how fast we’re moving,” Gerstenhaber says.

More than a year after GPT-4 spurred a frenzy of new investment in AI, it may be turning out to be more difficult to produce big new leaps in machine intelligence. With GPT-4 and similar models trained on huge swathes of online text, imagery, and video, it is getting more difficult to find new sources of data to feed to machine-learning algorithms. Making models substantially larger, so they have more capacity to learn, is expected to cost billions of dollars. When OpenAI announced its own recent upgrade last month, with a model that has voice and visual capabilities called GPT-4o, the focus was on a more natural and humanlike interface rather than on substantially more clever problem-solving abilities.

Adobe Says It Won’t Train AI Using Artists’ Work. Creatives Aren’t Convinced

When users first found out about Adobe’s new terms of service (which were quietly updated in February), there was an uproar. Adobe told users it could access their content “through both automated and manual methods” and use “techniques such as machine learning in order to improve [Adobe’s] Services and Software.” Many understood the update as the company forcing users to grant unlimited access to their work, for purposes of training Adobe’s generative AI, known as Firefly.

Late on Tuesday, Adobe issued a clarification: In an updated version of its terms of service agreement, it pledged not to train AI on its users’ content stored locally or in the cloud and gave users the option to opt out of content analytics.

Amid a crossfire of intellectual property lawsuits, the ambiguous language of the earlier terms update laid bare a climate of acute skepticism among artists, many of whom rely heavily on Adobe for their work. “They already broke our trust,” says Jon Lam, a senior storyboard artist at Riot Games, referring to how award-winning artist Brian Kesinger discovered generated images in the style of his art being sold under his name on Adobe’s stock image site, without his consent. Earlier this month, the estate of the late photographer Ansel Adams publicly scolded Adobe for allegedly selling generative AI imitations of his work.

Scott Belsky, Adobe’s chief strategy officer, had tried to assuage concerns when artists started protesting, clarifying that machine learning refers to the company’s non-generative AI tools—Photoshop’s “Content Aware Fill,” which lets users seamlessly remove objects from an image, is one of many features powered by machine learning. But while Adobe insists that the updated terms do not give the company content ownership and that it will never use user content to train Firefly, the misunderstanding triggered a bigger discussion about the company’s market monopoly and how a change like this could threaten the livelihoods of artists at any point. Lam is among the artists who still believe that, despite Adobe’s clarification, the company will use work created on its platform to train Firefly without the creators’ consent.

The nervousness over nonconsensual use and monetization of copyrighted work by generative AI models is not new. Early last year, artist Karla Ortiz was able to prompt images of her work using her name on various generative AI models, an offense that gave rise to a class action lawsuit against Midjourney, DeviantArt, and Stability AI. Ortiz was not alone—Polish fantasy artist Greg Rutkowski found that his name was one of the most commonly used prompts in Stable Diffusion when the tool first launched in 2022.

As the owner of Photoshop and creator of the PDF format, Adobe has reigned as the industry standard for over 30 years, powering much of the creative class. Its attempt to acquire the product design company Figma was abandoned in 2023 amid antitrust concerns, a testament to the company’s size.

Airbnb’s Olympics Push Could Help It Win Over Paris

Short-term rentals can function as a quick release valve for a city expecting an influx of visitors, increasing capacity for a short time nearly instantly. In fact, despite the usual hype around the Olympic Games, there are still many places to stay in Paris this summer.

A search on Airbnb for a two-person stay during the first weekend of the games returned more than 1,000 results, with many charging less than $200 a night. A search for hotel rooms on Expedia turned up only around 20 hotels offering similarly low rates. Hotel prices for the dates of the Olympics have actually fallen in Paris since December, but they remain higher than those for the same time last summer, with the average hotel room during the opening weekend of the games going for around €440 as of May.

Booking rates for short-term rentals during the Olympics are up by 8 percent compared to the dates two weeks before the games across all locations hosting Olympic events, but the number of available rooms has increased by 38 percent, according to AirDNA, a third-party platform that tracks short-term rentals.

The average price in Paris for a short-term rental during the Olympics is $481 a night, while those who booked earlier paid an average of $350. Outside of Paris, rates average $289, up from a previous $198. The “vast majority” of these listings on Airbnb, says Stephenson, come from families listing their primary homes. But other Parisians are begging travelers to stay away, warning that the games will bring chaos, and some are planning to flee the city.

People from more than 160 countries and regions have booked stays on Airbnb for the Olympics, according to the company. The largest influx of tourists comes from the US, with American travelers making up 20 percent of the bookings, and many other guests coming from the UK, Germany, and the Netherlands.

Against that background, and with Airbnb’s marketing push, Jamie Lane, chief economist and senior vice president of research of AirDNA, says it makes sense that more people are signing up with Airbnb to host. “Everyone starts getting Olympic fever,” he says, especially “with Airbnb doing more and more ads and market outreach within the city of Paris.”

Despite the flood of visitors, the ready availability of vacancies suggests that, like many athletes competing in Paris, some Airbnb hosts will end the games disappointed, their listings unbooked. But Lane says that in the past, large events have been seen to provide a lasting boost to Airbnb’s footprint in a place. “A city is left with more listings than it had going in,” Lane says. For “people that maybe decide to do it for the first time, it ends up being a good experience. It was very little work. They think: ‘I should do this again.’”

OpenAI-Backed Nonprofits Have Gone Back on Their Transparency Pledges

Neither database mandates nor generally contains up-to-date versions of the records that UBI Charitable and OpenResearch had said they provided in the past.

The original YC Research conflict-of-interest policy that Das did share calls for company insiders to be upfront about transactions in which their impartiality could be questioned and for the board to decide how to proceed.

Das says the policy “may have been amended since OpenResearch’s policies changed (including when the name was changed from YC Research), but the core elements remain the same.”

No Website

UBI Charitable launched in 2020 with $10 million donated from OpenAI, as first reported by TechCrunch last year. UBI Charitable’s aim, according to its government filings, is to put the more than $31 million it had received by the end of 2022 toward initiatives that try to offset “the societal impacts” of new technologies and ensure no one is left behind. It has donated largely to CitySquare in Dallas and Heartland Alliance in Chicago, both of which work on a range of projects to fight poverty.

UBI Charitable doesn’t appear to have a website but shares a San Francisco address with OpenResearch and OpenAI, and OpenAI staff have been listed on UBI Charitable’s government paperwork. Its three Form 990 filings since launching all state that records including governing documents, financial statements, and a conflict-of-interest policy were available upon request.

Rick Cohen, chief operating and communications officer for National Council of Nonprofits, an advocacy group, says “available upon request” is a standard answer plugged in by accounting firms. OpenAI, OpenResearch, and UBI Charitable have always shared the same San Francisco accounting firm, Fontanello Duffield & Otake, which didn’t respond to a request for comment.

Miscommunication or poor oversight could lead to the standard answer about access to records getting submitted, “even if the organization wasn’t intending to make them available,” Cohen says.

The disclosure question ended up on what’s known as the Form 990 as part of an effort in 2008 to help the increasingly complex world of nonprofits showcase their adherence to governance best practices, at least as implied by the IRS, says Kevin Doyle, senior director of finance and accountability at Charity Navigator, which evaluates nonprofits to help guide donors’ giving decisions. “Having that sort of transparency story is a way to indicate to donors that their money is going to be used responsibly,” Doyle says.

OpenResearch solicits donations on its website, and UBI Charitable stated on its most recent IRS filing that it had received over $27 million in public support. Doyle says Charity Navigator’s data show donations tend to flow to organizations it rates higher, with transparency among the measured factors.

It’s certainly not unheard of for organizations to share a wide range of records. Charity Navigator has found that most of the roughly 900 largest US nonprofits reliant on individual donors publish financial statements on their websites. It doesn’t track disclosure of bylaws or conflict-of-interest policies.

Charity Navigator publishes its own audited financial statements and at least eight nonstandard policies it maintains, including ones on how long it retains documents, how it treats whistleblower complaints, and which gifts staff can accept. “Donors can look into what we’re doing and make their own judgment rather than us operating as a black box, saying, ‘Please give us money, but don’t ask any questions,’” Doyle says.

Cohen of the National Council of Nonprofits cautions that over-disclosure could create vulnerabilities. Posting a disaster-recovery plan, for example, could offer a roadmap to computer hackers. He adds that just because organizations have a policy on paper doesn’t mean they follow it. But knowing what they were supposed to do to evaluate a potential conflict of interest could still allow for more public accountability than otherwise possible, and if AI could be as consequential as Altman envisions, the scrutiny may very well be needed.

Orkut’s Founder Is Still Dreaming of a Social Media Utopia

Before Orkut launched in January 2004, Büyükkökten warned the team that the platform he’d built it on could handle only 200,000 users. It wouldn’t be able to scale. “They said, let’s just launch and see what happens,” he explains. The rest is online history. “It grew so fast. Before we knew it, we had millions of users,” he says.

Orkut featured a digital Scrapbook and the ability to give people compliments (ranging from “trustworthy” to “sexy”), create communities, and curate your very own Crush List. “It reflected all of my personality traits. You could flatter people by saying how cool they were, but you could never say something negative about them,” he says.

At first, Orkut was popular in the US and Japan. But, as predicted, server issues severed its connection to its users. “We started having a lot of scalability issues and infrastructure problems,” Büyükkökten says. They were forced to rewrite the entire platform using C++, Java, and Google’s tools. The process took an entire year, and scores of original users dropped off due to sluggish speeds and one-too-many encounters with Orkut’s now-nostalgic “Bad, bad server, no donut for you” error message.

Around this time, though, the site became incredibly popular in Finland. Büyükkökten was bemused. “I couldn’t figure it out until I spoke to a friend who speaks Finnish. And he said: ‘Do you know what your name means?’ I didn’t. He told me that orkut means multiple orgasms.” Come again? “Yes, so in Finland, everyone thought they were signing up to an adult site. But then they would leave straight after as we couldn’t satisfy them,” he laughs.

Awkward double meanings aside, Orkut continued to spread across the world. In addition to exploding in Estonia, the platform went mega in India. Its true second home, though, was Brazil. “It became a huge success. A lot of people think I’m Brazilian because of this,” Büyükkökten explains. He has a theory about why Brazil went nuts for Orkut. “Brazil’s culture is very welcoming and friendly. It’s all about friendships and they care about connections. They’re also very early adopters of technology,” he says. At its peak, 11 million of Brazil’s 14 million internet users were on Orkut, most logging on through cybercafes. It took Facebook seven years to catch up.

But Orkut wasn’t without its problems (and many fake profiles). The site was banned in Iran and the United Arab Emirates. Government authorities in Brazil and India had concerns about drug-related content and child pornography, something Büyükkökten denies existed on Orkut. Brazilians coined the word orkutização to describe a social media site like Orkut becoming less cool after going mainstream. In 2014, having hemorrhaged users due to slow server speeds, Facebook’s more intuitive interface, and issues surrounding privacy, Orkut went offline. “Vic Gundotra, in charge of Google+, decided against having any competing social products,” Büyükkökten explains.

But Büyükkökten has fond memories. “We had so many stories of people falling in love and moving in together from different parts of the world. I have a friend in Canada who met his wife in Brazil through Orkut, a friend in New York who met his wife in Estonia and now they’re married with two kids,” he says. It also provided a platform for minority communities. “I was talking to a gay journalist from a small town in São Paulo who told me that finding all these LGBTQ people on Orkut transformed his life,” he adds.

Büyükkökten left Google in 2014 and founded a new social network, again featuring a simple five-letter title: Hello. He wanted to focus on positive connection. It used “loves” rather than likes, and users could choose from more than 100 personae, ranging from Cricket Fan to Fashion Enthusiast, and then were connected to like-minded people with common interests. Soft-launched in Brazil in 2018 with 2 million users, Hello enjoyed “ultra-high engagement” that Büyükkökten claims surpassed the likes of Instagram and Twitter. “One of the things that stood out in our user surveys was that people said when they open Hello, it makes them happy.”

The app was downloaded more than 2 million times—a fraction of the users Orkut enjoyed—but Büyükkökten is proud of it. “It surpassed all our dreams. There were numerous instances where our K-Factor (the number of new people that existing users bring to an app) reached 3, leading us to exponential growth,” he says. But, in 2020, Büyükkökten bid goodbye to Hello.

Now he’s working on a new platform. “It’ll leverage AI and machine learning to optimize for improving happiness, bringing people together, fostering communities, empowering users, and creating a better society,” he says. “Connection will be the cornerstone of design, interaction, product, and experience.” And the name? “If I told you the new brand, you would have an aha moment and everything would be crystal clear,” he says.

Once again, it’s driven by his enduring desire to connect people. “One of the biggest ills of society is the decline in social capital. After smartphones and the pandemic, we have stopped hanging out with our friends and don’t know our neighbors. We have a loneliness epidemic,” he says.

He is fiercely critical of current platforms. “My biggest passion in life is connecting people through technology. But when was the last time you met someone on social media? It’s creating shame, pessimism, division, depression, and anxiety,” he says. For Büyükkökten, optimism is more important than optimization. “These companies have engineered the algorithm for revenue. But it’s been awful for mental health. The world is terrifying right now and a lot of that has come through social media. There’s so much hate,” he says.

Instead, he wants social media to be a place of love and a facilitator for meeting new people in person. But why will it work this time around? “That’s a really good question,” he says. “One thing that has been really consistent is that people miss Orkut right now.” It’s true—Brazilian social media has recently been abuzz with memes and memories to celebrate the site’s 20th birthday. “A teenage boy even recently drove 10 hours to meet me at a conference to talk about Orkut. And I was like, how is that even possible?” he laughs. Orkut’s landing page is still live, featuring an open letter calling for a social media utopia.

This, along with our collective desire for a more human social media, is what makes Büyükkökten believe that his next platform is one that will truly stick around. Has he decided on that all-important name? “We haven’t announced it yet. But I’m really excited. I truly care. I want to bring that authenticity and sense of belonging back,” he concludes. Perhaps, as his Finnish fans would joke, it’s time for Orkut’s second coming.

This story first appeared in the July/August 2024 UK edition of WIRED magazine.