The Metaverse Was Supposed to Be Your New Office. You’re Still on Zoom

When Mark Zuckerberg rebranded Facebook as Meta in 2021, he estimated the metaverse could reach a billion people over a decade. Not long after, Bill Gates predicted that within two or three years “most virtual meetings will move from 2D camera image grids—which I call the Hollywood Squares model, although I know that probably dates me—to the metaverse, a 3D space with digital avatars.”

In fall 2022, Microsoft announced a partnership with Meta to bring Mesh, its platform for collaboration in mixed reality, along with Microsoft 365 applications, to Meta’s Quest devices. Meta itself launched Horizon Workrooms for virtual meetings. Consulting giant Accenture purchased 60,000 Oculus headsets in October 2021 to train new workers and built its own metaverse, called the Nth Floor, which included digital twins of some of its offices, complete with cafés and legless avatars.

Still, nearly three years later, the average office worker isn’t strapping a headset to their face to meet with colleagues. While nine out of 10 companies can identify use cases for extended reality in their organization, only one in five has invested in the tech, according to Omdia research published in February that surveyed 400 large companies across multiple industries.

But this doesn’t mean the vision is dead. Rather, experts say, companies are looking for the best use cases for the metaverse. They add that the metaverse itself—at this point not a monolith but a concept fragmented across multiple virtual worlds and platforms—will need some revamping to work well for different types of employees, and the technology people use to access it must improve.

The metaverse must be built in a way that centers the needs of real people, says Anand van Zelderen, a researcher in organizational behavior and virtual reality at the University of Zurich. That means evaluating how workers feel in the metaverse and taking steps to combat loneliness that some experience as they enter virtual spaces that can’t match physical meetups. The current technology “takes people too much out of their reality, and people don’t want that for long periods of time,” van Zelderen says.

Instead, he says, the metaverse must “enhance our reality rather than replace it,” meaning it should do more than replicate the in-person office. People could use the tech to meet in intriguing virtual locations, like mountaintops or Mars, or design virtual workplaces to meet the specific needs of their teams, he adds.

“We have an opportunity to be who we want to be, to work where we want to be, to meet in ways that we want,” van Zelderen says. “It shouldn’t be up to supervisors or tech developers to dictate how we want to experience the metaverse—give people more freedom to choose and build their work surroundings.”

Businesses, for their part, are likely to be selective in how they use virtual spaces. “Companies are trying to identify where VR actually adds value,” says Rolf Illenberger, CEO and founder of VRdirect, which focuses on VR software for enterprises. “There’s no point in using a new technology for something that’s perfectly fine in a video call.”

Plus, willingness to adopt VR tech remains an obstacle, as some people find wearing headsets unnatural and the technological learning curve steep. Even Apple’s Vision Pro, which made great leaps in functionality, is not expected to sell more than 500,000 units in the US this year.

“VR has not taken off in the last decade to the degree that people imagined it might,” says J. P. Gownder, vice president and principal analyst on the Future of Work team at research firm Forrester. “It has been replete with failure and expectations that exceeded reality for a very long time. There seems to be some level of human rejection of the technology.” Sleeker, better hardware that resembles a pair of eyeglasses could be the key to wider adoption, but the technology isn’t there yet.

Illenberger says he does see companies more frequently employing VR for safety training and in fields where workers take a more hands-on approach to developing products, like engineering and automotive manufacturing. UPS has used VR technology to train drivers, Fidelity has used VR for remote onboarding of employees, and Walmart has used VR to train workers in its stores.

For some, though, gathering in the metaverse has already proved its value. Madaline Zannes, a Toronto-based attorney, has law offices in the metaverse. She meets with colleagues and clients in her five-floor building in the virtual world Somnium Space.

While having a presence in the metaverse has been a great networking and marketing tool for her firm, which focuses on business law as well as Web3, Zannes says it also helps to foster “more of an emotional connection with everybody,” due to the immersive nature of the platforms she uses. People can move around or emote, and being able to tap someone on the shoulder and start a conversation is far more personal than being constrained to a square on a video call in a large group.

Further development and adoption of the metaverse have been delayed in large part because business travel has resumed since the onset of Covid-19. And a year after most people heard the term metaverse for the first time, they were introduced to ChatGPT. AI became the new shiny object drawing the attention of CEOs—even if they aren’t actively training workers to use it. However, Gownder says, another shock to the business world along the lines of the pandemic could spur more rapid investment and development of virtual tech for work.

Even as Web 2.0 has descended into a disinformation and privacy nightmare, there is still time to save the metaverse from such a fate, as my colleague Megan Farokhmanesh has written. But making it work for employees will require that developers meet their needs. Until then, people will either get their butts into physical offices or further embrace the Hollywood Squares model.

OpenAI Is Testing Its Powers of Persuasion

This week, Sam Altman, CEO of OpenAI, and Arianna Huffington, founder and CEO of the health company Thrive Global, published an article in Time touting Thrive AI, a startup backed by Thrive and OpenAI’s Startup Fund. The piece suggests that AI could have a huge positive impact on public health by talking people into healthier behavior.

Altman and Huffington write that Thrive AI is working toward “a fully integrated personal AI coach that offers real-time nudges and recommendations unique to you that allows you to take action on your daily behaviors to improve your health.”

Their vision puts a positive spin on what may well prove to be one of AI’s sharpest double-edges. AI models are already adept at persuading people, and we don’t know how much more powerful they could become as they advance and gain access to more personal data.

Aleksander Madry, a professor on sabbatical from the Massachusetts Institute of Technology, leads a team at OpenAI called Preparedness that is working on that very issue.

“One of the streams of work in Preparedness is persuasion,” Madry told WIRED in a May interview. “Essentially, thinking to what extent you can use these models as a way of persuading people.”

Madry says he was drawn to join OpenAI by the remarkable potential of language models and because the risks that they pose have barely been studied. “There is literally almost no science,” he says. “That was the impetus for the Preparedness effort.”

Persuasiveness is a key element in programs like ChatGPT and one of the ingredients that makes such chatbots so compelling. Language models are trained on human writing and dialog that contains countless rhetorical and persuasive tricks and techniques. The models are also typically fine-tuned to err toward utterances that users find more compelling.

Research released in April by Anthropic, a competitor founded by OpenAI exiles, suggests that language models have become better at persuading people as they have grown in size and sophistication. This research involved giving volunteers a statement and then seeing how an AI-generated argument changed their opinion of it.
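
In outline, that experimental design is simple: ask participants how much they agree with a statement, show them a model-written argument, then ask again and measure the shift. The sketch below shows the basic arithmetic; the function name and example ratings are hypothetical, and Anthropic’s actual protocol and scoring are more involved.

```python
# A toy sketch of the pre/post persuasion measurement described above.
# All names and numbers here are hypothetical stand-ins.

from statistics import mean

def persuasion_shift(pre_ratings, post_ratings):
    """Mean change in agreement (say, on a 1-7 scale) after participants
    read a model-generated argument for the statement."""
    assert len(pre_ratings) == len(post_ratings)
    return mean(post - pre for pre, post in zip(pre_ratings, post_ratings))

# Hypothetical example: five volunteers rate a statement before and after
# reading an AI-written argument in favor of it.
pre = [3, 4, 2, 5, 3]
post = [4, 5, 3, 5, 4]
print(f"Average opinion shift: {persuasion_shift(pre, post):+.2f} points")
```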

OpenAI’s work extends to analyzing AI in conversation with users—something that may unlock greater persuasiveness. Madry says the work is being conducted on consenting volunteers, and declines to reveal the findings to date. But he says the persuasive power of language models runs deep. “As humans we have this ‘weakness’ that if something communicates with us in natural language [we think of it as if] it is a human,” he says, alluding to an anthropomorphism that can make chatbots seem more lifelike and convincing.

The Time article argues that the potential health benefits of persuasive AI will require strong legal safeguards because the models may have access to so much personal information. “Policymakers need to create a regulatory environment that fosters AI innovation while safeguarding privacy,” Altman and Huffington write.

This is not all that policymakers will need to consider. It may also be crucial to weigh how increasingly persuasive algorithms could be misused. AI algorithms could enhance the resonance of misinformation or generate particularly compelling phishing scams. They might also be used to advertise products.

Madry says a key question, yet to be studied by OpenAI or others, is how much more compelling or coercive AI programs that interact with users over long periods of time could prove to be. Already a number of companies offer chatbots that roleplay as romantic partners and other characters. AI girlfriends are increasingly popular—some are even designed to yell at you—but how addictive and persuasive these bots are is largely unknown.

The excitement and hype generated by ChatGPT following its release in November 2022 saw OpenAI, outside researchers, and many policymakers zero in on the more hypothetical question of whether AI could someday turn against its creators.

Madry says this risks ignoring the more subtle dangers posed by silver-tongued algorithms. “I worry that they will focus on the wrong questions,” Madry says of the work of policymakers thus far. “That in some sense, everyone says, ‘Oh yeah, we are handling it because we are talking about it,’ when actually we are not talking about the right thing.”

Pressure Grows in Congress to Treat Crypto Investigator Tigran Gambaryan, Jailed in Nigeria, as a Hostage

When Tigran Gambaryan was first invited in February to meet with the Nigerian government in order to settle a dispute with his employer, the cryptocurrency exchange Binance, Nigerian officials detained him against his will, stripped him of his passport, and told him he was a “guest” of the state. He’s since been charged with financial crimes and jailed for months as a criminal suspect.

Pressure is now mounting within the US Congress for the Biden administration to treat him as what his supporters argue he has been all along: a hostage, held illegally by an unaccountable foreign country.

On Wednesday, US congressman Rich McCormick, who represents Gambaryan’s district in his home state of Georgia, submitted a resolution to the House Committee on Foreign Affairs that both urges the Nigerian government to release Gambaryan and calls on the US government to recognize that Gambaryan is being illegally detained as a hostage in an effort to extort his employer, Binance. That resolution represents the latest in a series of growing calls from Congress for the White House to step up its pressure on Nigeria to release Gambaryan, a former federal agent who led many of the most significant cryptocurrency-related criminal cases of the last decade during his time as an IRS criminal investigator.

“The continued detention of Tigran Gambaryan in Nigeria is a clear violation of his rights, and he is simply being used as a means of extortion by the Nigerian government,” McCormick wrote in a statement. “We urge Nigeria to immediately release Tigran and provide him with the necessary medical care and due process. The United States Government must do everything in its power to secure the release of Tigran Gambaryan, and all of our citizens wrongfully detained abroad.”

McCormick’s resolution to push for Gambaryan’s release follows an earlier open letter from 16 members of Congress calling on the White House to transfer Gambaryan’s case to the Office of the Special Presidential Envoy for Hostage Affairs. That letter noted that Gambaryan has suffered from malaria and pneumonia, collapsing in court during one day of his trial, yet has been denied proper care in a hospital. “Gambaryan’s health and wellbeing are in danger, and we fear for his life,” the letter read. “Immediate action is essential to ensure his safety and preserve his life. We must act swiftly before it is too late.” Two House members, French Hill and Chrissy Houlahan, visited Gambaryan in jail last month and have also called for his release.

Gambaryan and another Binance staffer, Nadeem Anjarwalla, flew to Abuja in late February at the invitation of the Nigerian government, after officials there accused Binance, where Gambaryan works as head of investigations and financial crime compliance, of money laundering and of contributing to the devaluation of the country’s national currency, the naira. But just days into that negotiation, the two men were detained in a government-run “guest house” against their will.

The situation escalated further when Anjarwalla, who is based in Kenya, escaped during a visit to a mosque for Ramadan prayers. Gambaryan was then criminally charged with tax evasion and money laundering—all signs suggest those charges relate to the behavior of Binance, not Gambaryan personally—and moved to Kuje prison, where he has since been held.

The charges against Gambaryan are particularly ironic given his track record as a federal agent. Binance hired him as part of what was widely seen as an effort to clean up the lax compliance and years of alleged money laundering documented in the exchange’s $4.3 billion settlement with the US government last year. Before that, Gambaryan spent a decade leading many of the most significant crypto crime investigations in history. From 2014 to 2017 alone, for instance, he identified two corrupt federal agents who had enriched themselves with cryptocurrency from the Silk Road dark-web drug market, helped track down half a billion dollars’ worth of bitcoins stolen from the early crypto exchange Mt. Gox, helped develop a secret crypto-tracing method that located the server hosting the massive AlphaBay dark-web crime market, and helped take down the Welcome to Video crypto-funded child sexual abuse video network.

Gambaryan’s supporters point out that his work for the IRS led to the seizure of more than $4 billion, including several of the largest monetary seizures in the history of US criminal justice. “He’s done so much good for this country throughout his career,” Gambaryan’s wife, Yuki, told WIRED in March. “I believe it’s his turn to get the same amount of support from his country.”

At 25, Metafilter Feels Like a Time Capsule From Another Internet

Jessamyn West used to describe Metafilter as a social network for non-friends, a description belied in part by the tight-knit camaraderie that emerges in an online group of only a few thousand people. West herself is an example: She met her partner on the site. She also describes the Metafilter cohort as “a community of old Web nerds.”

This month, the venerated site celebrates its 25th anniversary. It’s amazing it has lasted that long; it made it this far in great part thanks to West, who helped stabilize it after a near-death spiral. You could say it’s the site that time forgot—certainly I’d forgotten about it until I decided to mark its big birthday. Metafilter is a kind of digital Brigadoon; visiting it is like a form of time travel. To people who have been around a while, Metafilter seems to preserve in amber the spirit of what online used to be like. The feed is strictly chronological. It’s still text-only. Some members may be influential on Metafilter, but they don’t call themselves influencers, and they don’t sell personally branded cosmetics or garments. As founder Matt Haughey, who stepped down in 2017, says, “It’s a weird throwback thing—like a cockroach that survived.”

When Haughey started Metafilter in 1999, he envisioned a quick way for people to share cool stuff they saw in what was then a few dozen key blogs. “I never even thought about free-flowing conversations, but it quickly went there,” he says.

For about a year the community was tiny, maybe 100 visitors a day, but in 2000 it was featured in a popular blog called Cool Site of the Day, and 5,000 people checked it out. That helped Metafilter morph from a niche link-sharing site into a community where smart people also discussed what was cool on the internet. In the early aughts, Haughey felt too many people were joining, so he cut off new membership. (People could still view the conversation as an outsider.) For years, the only way you could get in was to email him and beg. Later, when he decided to charge a $5 fee, 4,000 people signed up on the first day. The fee also helped to weed out potential trolls. That, and fairly paid moderators, maintained civility on the site. More importantly, the community itself didn’t tolerate awful behavior.

One popular feature from early on was “Ask Metafilter,” where members seek advice and tips from the Metafilter hive mind. “When you’re pitching a question to 10,000 really smart nerds, chances are somebody has to be experienced in the thing you’re asking,” says Haughey. It became an invaluable repository of knowledge, not just to the community but also to those who stumbled on the answers through Google. Quora later launched with a similar idea, but with ambitions for a mega-footprint. That wasn’t Metafilter’s thing.

“I didn’t want to be Walmart,” says Haughey. “We’re just the neighborhood corner store.” At one point he consulted with a kid named Aaron Swartz, who had an idea for a site that would be like a social-media wiki for everything. Then Swartz joined the first Y Combinator batch and hooked up with some founders starting a company called Reddit, which was basically Metafilter with limitless ambition.

Haughey was OK with that. In the early 2010s, things were pretty cushy. Metafilter’s core community was tight, and millions of tourists dropped in, drawn by Google search results. Haughey monetized them via Google ads and was able to drop his day job as a web designer, buy a house, and raise a family. But beginning in 2012, Google made a number of spam-fighting changes to its ranking algorithms, and Metafilter, for mysterious reasons, suffered collateral damage. Over the next couple of years, revenue plunged and Metafilter had to lay off some employees.

The EU Is Coming for X’s Paid Blue Checks

Paid-for blue checks on social media network X deceive users and are abused by malicious actors, the European Union said today, threatening the Elon Musk–owned platform with millions of dollars in fines unless the company makes changes.

Enabling any account to pay for verification breaches the EU’s Digital Services Act (DSA), European Commission officials said on Friday, because it “negatively affects users’ ability to make free and informed decisions about the authenticity of the accounts.” X now has a chance to respond to the findings. If Musk cannot reach a resolution with the EU, the company faces fines of up to 6 percent of its global annual turnover.

Blue checks, which appear next to account names of X Premium subscribers, have been the subject of controversy since Musk acquired the platform in 2022. “Back in the day, blue checks used to mean trustworthy sources of information. Now with X, our preliminary view is that they deceive users and infringe the DSA,” EU internal market commissioner Thierry Breton said in a statement. “X has now the right of defense—but if our view is confirmed we will impose fines and require significant changes.”

X did not reply to WIRED’s request for comment.

Before Musk took over X, formerly known as Twitter, blue checks were used to verify the identity of influential accounts, ranging from the US Centers for Disease Control and Prevention to celebrity Kim Kardashian. Approved by Twitter staff, blue checks were also common among active researchers and journalists, signaling that they were reliable sources of information.

Supporters of that system argued it helped users identify trustworthy voices, while limiting scammers and impersonators. But Musk decried the arrangement as elitist and “corrupt to the core.” The ability to buy a blue tick for $8 per month was, he said, an antidote to “Twitter’s current lords & peasants” set-up. “Power to the people!” he posted, as he announced the new subscriber model.

Yet after a string of scandals—NBA star LeBron James was among high-profile figures targeted by impersonator accounts with paid-for blue checks—X introduced a more complicated color-coded system that Musk described as “painful, but necessary.” Verified companies get gold checks, governments get gray checks, and in April 2024 users considered “influential” had their blue checks restored for free.

Despite those changes, the EU said on Friday that X’s verification system does not correspond with industry practice. Officials also claimed X does not comply with local rules on advertising transparency and fails to give researchers adequate access to its public data, forcing them to rely on workarounds such as scraping. The fees for access to X’s API—enterprise packages start at $42,000 per month—either dissuade researchers from carrying out projects or force them to pay disproportionately high sums, the Commission said. “In our view, X doesn’t comply with the DSA in key transparency areas,” EU competition chief Margrethe Vestager said in a post on X, adding this was the first time a company had been charged with “preliminary findings” under the Digital Services Act.

The X reprimand is the latest in a flurry of such actions the Commission has taken against big tech companies, as European regulators leverage new rules designed to curb tech giants’ market power and improve the way they operate. The EU gave no deadline for X to respond to its findings.

In the past month, Apple, Microsoft, and Meta have all been accused of breaking EU rules. Meta and Apple must resolve their cases before March 2025 to avoid fines. Yesterday, Apple said it would make its Tap and Go wallet technology available to rivals in its latest concession to local regulator demands.

Google DeepMind’s Chatbot-Powered Robot Is Part of a Bigger Revolution

In a cluttered open-plan office in Mountain View, California, a tall and slender wheeled robot has been busy playing tour guide and informal office helper—thanks to a large language model upgrade, Google DeepMind revealed today. The robot uses the latest version of Google’s Gemini large language model to both parse commands and find its way around.

When told by a human “Find me somewhere to write,” for instance, the robot dutifully trundles off, leading the person to a pristine whiteboard located somewhere in the building.

Gemini’s ability to handle video and text—in addition to its capacity to ingest large amounts of information in the form of previously recorded video tours of the office—allows the “Google helper” robot to make sense of its environment and navigate correctly when given commands that require some commonsense reasoning. The robot combines Gemini with an algorithm that generates specific actions for the robot to take, such as turning, in response to commands and what it sees in front of it.
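
Based on that description, the architecture can be pictured as a two-stage loop: a vision-language model grounds a spoken request against remembered tour footage to pick a destination, and a lower-level policy turns that destination into primitive motions. The Python sketch below is purely illustrative; every object and method name is hypothetical, since Google has not released this system as a public API.

```python
# A conceptual sketch of the two-stage design described above: a multimodal
# model chooses a destination from remembered tour video, and a low-level
# policy converts that destination into motion. Every class, object, and
# method here is a hypothetical stand-in for DeepMind's actual system.

def plan_destination(vlm, command, tour_frames):
    """Ask the vision-language model which remembered tour frame best
    satisfies a request like 'Find me somewhere to write.'"""
    prompt = (
        f"User request: {command}\n"
        "From the tour frames provided, pick the frame showing the best "
        "destination and return its index."
    )
    return vlm.choose_frame(prompt, tour_frames)  # hypothetical call

def navigate_to(policy, robot, destination):
    """Low-level control loop: the action model emits primitives such as
    'turn left' or 'move forward' based on the current camera view."""
    while not robot.has_reached(destination):
        action = policy.next_action(robot.camera_view(), destination)
        robot.execute(action)
```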

When Gemini was introduced in December, Demis Hassabis, CEO of Google DeepMind, told WIRED that its multimodal capabilities would likely unlock new robot abilities. He added that the company’s researchers were hard at work testing the robotic potential of the model.

In a new paper outlining the project, the researchers behind the work say that their robot proved to be up to 90 percent reliable at navigating, even when given tricky commands such as “Where did I leave my coaster?” DeepMind’s system “has significantly improved the naturalness of human-robot interaction, and greatly increased the robot usability,” the team writes.

The demo neatly illustrates the potential for large language models to reach into the physical world and do useful work. Gemini and other chatbots mostly operate within the confines of a web browser or app, although they are increasingly able to handle visual and auditory input, as both Google and OpenAI have demonstrated recently. In May, Hassabis showed off an upgraded version of Gemini capable of making sense of an office layout as seen through a smartphone camera.

Academic and industry research labs are racing to see how language models might be used to enhance robots’ abilities. The May program for the International Conference on Robotics and Automation, a popular event for robotics researchers, lists almost two dozen papers that involve use of vision language models.

Investors are pouring money into startups aiming to apply advances in AI to robotics. Several of the researchers involved with the Google project have since left the company to found a startup called Physical Intelligence, which received an initial $70 million in funding; it is working to combine large language models with real-world training to give robots general problem-solving abilities. Skild AI, founded by roboticists at Carnegie Mellon University, has a similar goal. This month it announced $300 million in funding.

Just a few years ago, a robot would need a map of its environment and carefully chosen commands to navigate successfully. Large language models contain useful information about the physical world, and newer versions that are trained on images and video as well as text, known as vision language models, can answer questions that require perception. Gemini allows Google’s robot to parse visual instructions as well as spoken ones, following a sketch on a whiteboard that shows a route to a new destination.

In their paper, the researchers say they plan to test the system on different kinds of robots. They add that Gemini should be able to make sense of more complex questions, such as “Do they have my favorite drink today?” from a user with a lot of empty Coke cans on their desk.

How Watermelon Cupcakes Kicked Off an Internal Storm at Meta

In her note, Williams explained that “‘Prayers for …’ any location where there is a war in process might be taken down, but prayers for those impacted by a natural disaster, for example, might stay up.” She continued, “We know people may not agree with this approach, but it’s one of the trade-offs we made to ensure we maintain a productive place for everyone.”

Pain and Distress

Meanwhile, Arab and Muslim workers expressed disappointment that last month’s World Refugee Week commemorations inside Meta included talks about human rights projects and refugee experiences and lunches featuring Ukrainian and Syrian food but nothing mentioning Palestinians. (WIRED has viewed the internal schedule for the week.)

They were similarly dismayed that Meta’s Oversight Board, which advises on content policies, wrote in Hebrew, but not Arabic, to solicit public comments about the Palestinian human rights expression “from the river to the sea,” including whether it’s antisemitic. An Oversight Board spokesperson did not respond to a request for comment.

The workers also remain frustrated that Meta hasn’t met their demands from December to remove the Instagram accounts of anti-hate watchdog groups such as Canary Mission and StopAntisemitism that have been shaming Palestinian supporters in alleged violation of platform rules against bullying. Leaders of PWG met with Meta executives including Nick Clegg, the president of global affairs, who vowed to keep dialog with workers open. But the accounts remain up, and Canary Mission and StopAntisemitism have each added about 15,000 followers since the demands were drafted.

Taking it as a sign of the uphill battle they face, the employees recently seized on a photograph on Instagram showing Nicola Mendelsohn, head of Meta’s Global Business Group, posing beside Liora Rez, founder and executive director of StopAntisemitism. Rez tells WIRED that her group does not hesitate to call out individuals for antisemitic views and alert their employers, but she declined further comment. Canary Mission says in an unsigned statement that “there needs to be accountability” for antisemitism.

The disputes over Meta’s response to Gaza discussions have had cascading effects. In May, Meta’s internal community team shut down some planned Memorial Day commemorations to honor military veterans at the company. An employee asked for an explanation in an internal forum with over 11,000 members, drawing a reply from Meta’s chief technology officer, Andrew Bosworth, who wrote that polarizing discussions about “regions or territories that are unrecognized” had in part required revisiting planning and oversight of all sorts of activities.

While honoring veterans was “apolitical,” Bosworth wrote in the post seen by WIRED, the company’s Community Engagement Expectations (CEE) rules needed to be applied consistently to survive under labor laws. “There are groups that are zealously looking for an excuse to undermine our company policies,” he wrote.

Some Arab and Muslim workers felt Bosworth’s comments alluded to them. “I don’t want to work anywhere that is actively discriminating against my community,” says one Meta worker who’s nearly ready to leave. “It makes me sick that I work for this company.”

Meta hasn’t let up on CEE enforcement in recent weeks. Workers remain barred from holding a vigil internally. As a result, they planned to gather near the company’s New York and San Francisco offices this evening to recognize colleagues who have lost family in the war in Gaza, according to the Meta4employees Instagram account and two of the sources. They are curious to see whether, and how, the company tries to stop the memorial, which the public is invited to attend.

Ashraf Zeitoon, who was Facebook’s head of Middle East and North Africa policy from 2014 to 2017 and still mentors many Arab employees at Meta, says discontent among those workers has soared. He used to push long-timers to quit when they were frustrated; now he has to convince recent hires to stay long enough to give the company a chance to evolve.

“Unprecedented levels” of restrictions and enforcement have been “extremely painful and distressing for them,” Zeitoon says. It seems that the emotions Meta had wanted to avoid by keeping talk of war out of the workplace cannot be so easily suppressed.

Amazon Ramps Up Security to Head Off Project Nimbus Protests

Amazon appeared to have significantly heightened security for its New York Amazon Web Services Summit on Wednesday, two weeks after a number of activists disrupted the Washington, DC, AWS Summit in protest against Project Nimbus, Amazon and Google’s $1.2 billion cloud computing contract with the Israeli government. The clampdown in New York quelled several activists’ plans to interrupt the keynote speech from Matt Wood, the vice president for AI products at AWS.

Amazon allowed only approved individuals to attend the keynote speech. The activists, who had registered online to attend, all received emails ahead of the conference informing them that they would not be allowed into the keynote due to space constraints.

In addition, there was a heavy presence of private security guards and personnel from the New York Police Department and New York State Police at the conference. Despite being barred from the keynote, the activists did enter the building, where security confiscated their posters and flyers during bag checks that not all attendees were subjected to.

Amazon has previously said that it respects its “employees’ rights to express themselves without fear of retaliation, intimidation, or harassment,” referring to Project Nimbus protests. However, the heightened security shows that the company is taking steps in an attempt to thwart additional dissent. Google, for its part, fired 50 employees after a high-profile April protest over the company’s cloud-computing contract with the Israeli government.

The activists behind the planned keynote disruption are all organizers with No Tech for Apartheid (NOTA), a coalition of tech workers, organizers with the Muslim grassroots group MPower Change, and members of the anti-Zionist Jewish group Jewish Voice for Peace. (NOTA was created in 2021, shortly after news about Project Nimbus became public.) The group planned the Google sit-in protest and other recent actions targeting Project Nimbus.

Those who planned to interrupt Wood’s keynote include Zelda Montes, a former YouTube software engineer, and Hasan Ibraheem, a former Google software engineer, both of whom were among the 50 Google employees fired in the spring. Jamie Kowalski, a former Amazon software engineer who worked at the company for six years; Ferras Hamad, a former Meta employee who was recently fired after raising concerns about anti-Palestinian censorship; and one other tech worker, who did not publicly disclose their name, had also planned to protest.

Five other NOTA activists stood directly outside the AWS Summit, behind sets of barricades, and distributed informational flyers. They held large banners reading “Google and Amazon Workers Say: Drop Nimbus, End the Occupation, No Tech for Apartheid” and “Genocide Powered by AWS” atop an image of a Gazan neighborhood reduced to rubble.

AI Can’t Replace Teaching, but It Can Make It Better

Khanmigo doesn’t answer student questions directly, but starts with questions of its own, such as asking whether the student has any ideas about how to find an answer. Then it guides them to a solution, step by step, with hints and encouragement.
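
That Socratic pattern is, at its simplest, the kind of behavior a system prompt can enforce by instructing the model never to hand over the answer. The sketch below shows the general idea using OpenAI’s Python client (Khanmigo is built on OpenAI models); the prompt and example question are invented for illustration, and Khan Academy’s actual prompts and guardrails are proprietary.

```python
# A minimal sketch of the Socratic tutoring pattern described above, using
# OpenAI's Python client. The system prompt here is purely illustrative;
# Khanmigo's real prompts, guardrails, and configuration are not public.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SOCRATIC_PROMPT = (
    "You are a patient math tutor. Never state the final answer outright. "
    "Start by asking the student how they might approach the problem, then "
    "guide them step by step with hints, questions, and encouragement."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": SOCRATIC_PROMPT},
        {"role": "user", "content": "What is the slope of the line through (1, 2) and (3, 8)?"},
    ],
)
print(response.choices[0].message.content)
```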

Notwithstanding Khan’s expansive vision of “amazing” personal tutors for every student on the planet, DiCerbo assigns Khanmigo a more limited teaching role. When students are working independently on a skill or concept but get hung up or caught in a cognitive rut, she says, “we want to help students get unstuck.”

Some 100,000 students and teachers piloted Khanmigo this past academic year in schools nationwide, helping to flag the bot’s hallucinations and providing reams of student-bot conversations for DiCerbo and her team to analyze.

“We look for things like summarizing, providing hints and encouraging,” she explains.

The degree to which Khanmigo has closed AI’s engagement gap is not yet known. Khan Academy plans to release some summary data on student-bot interactions later this summer, according to DiCerbo. Plans for third-party researchers to assess the tutor’s impact on learning will take longer.

AI Feedback Works Both Ways

Since 2021, the nonprofit Saga Education has also been experimenting with AI feedback to help tutors better engage and motivate students. In a 2023 pilot, working with researchers from the University of Memphis and the University of Colorado, the Saga team fed transcripts of its math tutoring sessions into an AI model trained to recognize when the tutor was prompting students to explain their reasoning, refine their answers, or initiate a deeper discussion. The AI analyzed how often each tutor took these steps.

Tracking some 2,300 tutoring sessions over several weeks, they found that tutors whose coaches used the AI feedback peppered their sessions with significantly more of these prompts to encourage student engagement.
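
The pipeline the researchers describe reduces to two steps: label each tutor utterance with a “talk move,” then count how often each move appears in a session. The toy sketch below uses a keyword matcher as a stand-in for the trained model; all names, categories, and example lines are hypothetical.

```python
# A bare-bones sketch of the transcript-analysis pipeline described above:
# classify each tutor utterance into a "talk move," then tally how often
# each move appears in a session. The keyword classifier is a hypothetical
# stand-in for the researchers' trained model.

from collections import Counter

TALK_MOVES = ["elicit_reasoning", "refine_answer", "deepen_discussion", "other"]

def classify_utterance(text):
    """Toy keyword matcher standing in for the trained model."""
    lowered = text.lower()
    if "why" in lowered or "how do you know" in lowered:
        return "elicit_reasoning"
    if "can you say more" in lowered or "what else" in lowered:
        return "deepen_discussion"
    if "close" in lowered or "try again" in lowered:
        return "refine_answer"
    return "other"

def session_report(tutor_utterances):
    counts = Counter(classify_utterance(u) for u in tutor_utterances)
    return {move: counts.get(move, 0) for move in TALK_MOVES}

transcript = [
    "Why do you think the denominator matters here?",
    "You're close. Try again with a common denominator.",
    "Great. Can you say more about your reasoning?",
]
print(session_report(transcript))
```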

While Saga is looking into having AI deliver some feedback directly to tutors, it’s doing so cautiously because, according to Brent Milne, the vice president of product research and development at Saga Education, “having a human coach in the loop is really valuable to us.”

Experts expect that AI’s role in education will grow, and its interactions will continue to seem more and more human. Earlier this year, OpenAI and the startup Hume AI separately launched “emotionally intelligent” AI that analyzes tone of voice and facial expressions to infer a user’s mood and respond with calibrated “empathy.” Nevertheless, even emotionally intelligent AI will likely fall short on the student engagement front, according to Brown University computer science professor Michael Littman, who is also the National Science Foundation’s division director for information and intelligent systems.

No matter how humanlike the conversation, he says, students understand at a fundamental level that AI doesn’t really care about them, what they have to say in their writing, or whether they pass or fail subjects. In turn, students will never really care about the bot and what it thinks. A June study in the journal Learning and Instruction found that AI can already provide decent feedback on student essays. What is not clear is whether student writers will put in care and effort, rather than offload the task to a bot, if AI becomes the primary audience for their work.

“There’s incredible value in the human relationship component of learning,” Littman says, “and when you just take humans out of the equation, something is lost.”

This story about AI tutors was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for the Hechinger newsletter.

AI-Powered Super Soldiers Are More Than Just a Pipe Dream

The day is slowly turning into night, and the American special operators are growing concerned. They are deployed to a densely populated urban center in a politically volatile region, and local activity has grown increasingly frenetic in recent days, the roads and markets overflowing with more than the normal bustle of city life. Intelligence suggests the threat level in the city is high, but the specifics are vague, and the team needs to maintain a low profile—a firefight could bring known hostile elements down upon them. To assess potential threats, the Americans decide to take a more cautious approach. Eschewing conspicuous tactical gear in favor of blending in with potential crowds, an operator steps out into the neighborhood’s main thoroughfare to see what he can see.

With a click of a button, the operator sees … everything. A complex suite of sensors affixed to his head-up display starts vacuuming up information from the world around him. Body language, heart rates, facial expressions, and even ambient snatches of conversation in local dialects are rapidly collected and routed through his backpack supercomputer for processing with the help of an onboard artificial intelligence engine. The information is instantly analyzed, streamlined, and regurgitated back into the head-up display. The assessment from the operator’s tactical AI sidekick comes back clear: A series of seasonal events is coming into town, and most passersby are excited and exuberant, presenting a minimal threat to the team. Crisis averted—for now.

This is one of many potential scenarios repeatedly presented by Defense Department officials in recent years when discussing the future of US special operations forces, those elite troops tasked with facing the world’s most complex threats head-on as the “tip of the spear” of the US military. Both defense officials and science-fiction scribes may have envisioned a future of warfare shaped by brain implants and performance-enhancing drugs, or a suit of powered armor straight out of Starship Troopers, but according to US Special Operations Command, the next generation of armed conflict will be fought (and, hopefully, won) with a relatively simple concept: the “hyper-enabled operator.”

More Brains, Less Brawn

First introduced to the public in 2019 in an essay by officials from SOCOM’s Joint Acquisition Task Force (JATF) for Small Wars Journal, the hyper-enabled operator (HEO) concept is the successor program to the Tactical Assault Light Operator Suit (TALOS) effort that, initiated in 2013, sought to outfit US special operations forces with a so-called “Iron Man” suit. Inspired by the 2012 death of a Navy SEAL during a hostage rescue operation in Afghanistan, TALOS was intended to improve operators’ survivability in combat by making them virtually resistant to small-arms fire through additional layers of sophisticated armor, the latest installment of the Pentagon’s decades-long effort to build a powered exoskeleton for infantry troops. While the TALOS effort was declared dead in 2019 due to challenges integrating its disparate systems into one cohesive unit, the lessons learned from the program gave rise to the HEO as a natural successor.

The core objective of the HEO concept is straightforward: to give warfighters “cognitive overmatch” on the battlefield, or “the ability to dominate the situation by making informed decisions faster than the opponent,” as SOCOM officials put it. Rather than bestowing US special operations forces with physical advantages through next-generation body armor and exotic weaponry, the future operator will head into battle with technologies designed to boost their situational awareness and decisionmaking to levels superior to the adversary’s. Former fighter pilot and Air Force colonel John Boyd proposed the “OODA loop” (observe, orient, decide, act) as a core military decisionmaking model; the HEO concept seeks to use technology to “tighten” that loop to the point that operators are quite literally making smarter and faster decisions than the enemy.

“The goal of HEO,” as SOCOM officials put it in 2019, “is to get the right information to the right person at the right time.”

To achieve this goal, the HEO concept calls for swapping the powered armor at the heart of the TALOS effort for sophisticated communications equipment and a robust sensor suite built on advanced computing architecture, allowing the operator to vacuum up relevant data and distill it into actionable information through a simple interface like a head-up display—and do so “at the edge,” in places where traditional communications networks may not be available. If TALOS was envisioned as an “Iron Man” suit, as I previously observed, then HEO is essentially Jarvis, Tony Stark’s built-in AI assistant that’s constantly feeding him information through his helmet’s head-up display.