Twitter has backed off a plan to purge inactive accounts after a backlash from people pointing out that it would cull most dead people’s tweet archives.
The social media site had announced it would, on Dec. 11, deactivate all accounts that had not signed in at any point during the past six months. But on Wednesday, Twitter said it was suspending the plan until it could find a good way to memorialize dead people’s Twitter accounts.
Facebook lets dead users’ friends and family report them as deceased and set up a special page for sharing memories about that person. On Twitter, by contrast, the timelines of users who die are (at present) left unchanged.
“We’ve heard you on the impact that this would have on the accounts of the deceased. This was a miss on our part,” the company tweeted on Wednesday. “We will not be removing any inactive accounts until we create a new way for people to memorialize accounts.”
Twitter initially planned to remove inactive accounts in order to free up usernames. In many cases, lucrative usernames are held by accounts that have been inactive for years, frustrating active users left with unwieldy handles.
As Wired notes, even the momentary threat of losing their prestige handles led some early users to restake their claims. After a 12-year silence, the person behind @flawless, for example, was back within minutes to challenge a would-be takeover.
A woman who was suspended by TikTok after posting viral videos critical of the Chinese government’s actions in Xinjiang said in a Twitter post that the Chinese video-sharing app has restored her account and apologized.
New Jersey teenager Feroza Aziz had posted a series of videos that initially looked like makeup tutorials, before quickly morphing into stinging rebukes of China’s treatment of Uighur Muslims. “So the first thing you need to do is grab your lash curler, curl your lashes, obviously, then you’re going to put them down and use your phone that you’re using right now to search up what’s happening in China,” she said in one.
“I thought if I made this sound like a makeup tutorial, people would want to watch it,” Aziz earlier told CNN. “When I spoke straightly about the Uighur Muslims, that video got taken down.”
TikTok, owned by Beijing-based ByteDance Inc., blamed a “human moderation error” for the removal of her viral video, noting in a lengthy statement that a previous account belonging to Aziz was removed for posting a video including an image of Osama bin Laden, which violated its guidelines. The company says Aziz’s video doesn’t violate its standards, shouldn’t have been removed, and was only offline for 50 minutes total. TikTok says it is conducting a broader review of its content moderation process.
U.S. lawmakers have expressed concern that the app’s growing popularity poses a national security risk, including censorship by the Chinese government. The U.S. has leveled similar claims of potential censorship against Chinese tech companies like Huawei Technologies Co., while sanctioning others like security camera maker Hangzhou Hikvision Digital Technology Co. Ltd. for their involvement in Xinjiang.
UPDATE: tik tok has issued a public apology and gave me my account back. Do I believe they took it away because of a unrelated satirical video that was deleted on a previous deleted account of mine? Right after I finished posting a 3 part video about the Uyghurs? No.
The incident is the latest flare-up for companies that have to navigate political sensitivities in China as well as government and consumer backlash in the U.S. and elsewhere to actions seen as caving to China’s political ambitions.
Chinese state television in October dropped all National Basketball Association coverage after a team official tweeted in support of Hong Kong pro-democracy protesters, and nearly all of the league’s Chinese sponsors cut ties. Meanwhile, a DreamWorks Animation children’s movie was banned in Vietnam because it contained a map of the South China Sea reflecting China’s expansive and widely disputed claims.
–– With assistance from Melissa Cheok and Jihye Lee.
On Nov. 13, Facebook announced with great fanfare that it was taking down substantially more posts containing hate speech from its platform than ever before.
Facebook removed more than seven million instances of hate speech in the third quarter of 2019, the company claimed, an increase of 59% over the previous quarter. A growing share of that hate speech (80%) is now detected not by humans, the company added, but automatically, by artificial intelligence.
The new statistics, however, conceal a structural problem Facebook has yet to overcome: not all hate speech is treated equally.
Of all the hate speech acted on by Facebook, 80% is now flagged first by algorithms (Emily Barone and Lon Tweeten/TIME)
The algorithms Facebook currently uses to remove hate speech only work in certain languages. That means it has become easier for Facebook to contain the spread of racial or religious hatred online in the primarily developed countries and communities where global languages like English, Spanish and Mandarin dominate.
But in the rest of the world, it’s as difficult as ever.
Facebook tells TIME it has functional hate speech detection algorithms (or “classifiers,” as it calls them internally) in more than 40 languages worldwide. In the rest of the world’s languages, Facebook relies on its own users and human moderators to police hate speech.
Unlike the algorithms that Facebook says now automatically detect 80% of hateful posts without needing a user to have reported them first, these human moderators do not regularly scan the site for hate speech themselves. Instead, their job is to decide whether posts that users have already reported should be removed.
Languages spoken by minorities are the hardest hit by this disparity. It means that racial slurs, incitements to violence and targeted abuse can spread faster in the developing world than they do at present in the U.S., Europe and elsewhere.
India, the second-most populous country in the world with more than 1.2 billion people and nearly 800 languages, offers an insight into this problem.
Facebook declined to share a full list of languages in which it has working hate speech detection algorithms. But the company tells TIME that of India’s 22 official languages, only four — Hindi, Bengali, Urdu and Tamil — are covered by Facebook’s algorithms. Some 25% of India’s population speak none of those four languages or English, and about 38% do not speak any of them as a first language, according to a TIME analysis of the 2011 Indian census.
Facebook has algorithms to detect hate speech in only four of India’s 22 official (or “scheduled”) languages (Emily Barone and Lon Tweeten/TIME)
In the state of Assam, in northeastern India, this gap in Facebook’s systems has allowed for violent extremism to flourish — unchecked by regulators and accelerated by the power Facebook gives anybody to share text, images and video widely.
In Assam, the global advocacy group Avaaz has identified an ongoing campaign of hate by the Assamese-speaking, largely Hindu, majority against the Bengali-speaking, largely Muslim, minority. In a report published in October, Avaaz detailed Facebook posts calling Bengali Muslims “parasites,” “rats” and “rapists,” and calling for Hindu girls to be poisoned to stop Muslims from raping them. The posts were viewed at least 5.4 million times. The U.N. has called the situation there a “potential humanitarian crisis.”
Facebook confirmed to TIME that it does not have an algorithm for detecting hate speech in Assamese, the main language spoken in Assam. Instead of automatically detecting hate speech in Assamese, Facebook employs an unspecified number of human moderators around the clock who speak the language. But those moderators, for the most part, only respond to posts flagged by users.
Campaigners say Facebook’s reliance on user reports of hate speech in languages where it does not have working algorithms puts too much of a burden on these victims of hate speech, who are often not highly educated and already from marginalized communities. “In the Assamese context, the minorities most directly targeted by hate speech on Facebook often lack online access or the understanding of how to navigate Facebook’s flagging tools. No one else is reporting it for them either,” the Avaaz report says. “This leaves Facebook with a huge blindspot,” Alaphia Zoyab, a senior campaigner at Avaaz, tells TIME.
The solution, Zoyab says, isn’t less human involvement, it’s more: more Facebook employees doing proactive searches for hate speech, and a concerted effort to build a dataset of Assamese hate speech. “Unless Facebook chooses to become smarter about understanding the societies in which it operates, and ensures it puts human beings on the case to proactively ‘sweep’ the platform for violent content, in some of these smaller languages we’re going to continue in this digital dystopia of dangerous hate,” she tells TIME.
Technical issues
Facebook says the reason it can’t automatically detect hate speech in Assamese — and other small languages — is because it doesn’t have a large enough dataset to train the artificial intelligence program that would do so.
In a process called machine learning, Facebook trains its computers to grade posts on a spectrum of hatefulness by giving them tens or hundreds of thousands of examples of hate speech. In English, which has 1.5 billion speakers, that’s easy enough. But in smaller languages like Assamese, which has only 23.6 million speakers according to the 2011 Indian census, that becomes harder. Add the fact that not many hateful posts in Assamese are flagged as hate speech in the first place, and it becomes very difficult to train a program to detect hatred in Assamese.
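As an illustration of that training setup, here is a minimal sketch using scikit-learn and invented placeholder examples. It is a toy stand-in, not Facebook’s actual classifier, whose features, models and training data are not public.

```python
# Minimal sketch of supervised text classification: a model learns a
# "hatefulness" score from labeled examples. Illustrative toy only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = hate speech, 0 = benign.
# A real classifier needs tens or hundreds of thousands of these per language.
texts = [
    "members of that community are parasites",
    "they are rats and should be driven out",
    "the harvest festival starts on friday",
    "congratulations on your exam results",
]
labels = [1, 1, 0, 0]

# Turn text into word-weight features, then fit a simple classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The trained model scores new, unreported posts on a spectrum of hatefulness.
score = model.predict_proba(["those people are parasites"])[0][1]
print(f"estimated probability of hate speech: {score:.2f}")
```

The scarcity of labeled examples in a language like Assamese is precisely what keeps a classifier of this kind from being trained reliably.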
But campaigners say this doesn’t make the current situation in Assam inevitable. When hate speech against the Rohingya minority in Myanmar spread virulently via Facebook in Burmese (a language spoken by some 42 million people), Facebook was slow to act because it had no hate-speech detection algorithm in Burmese and few Burmese-speaking moderators. But since the Rohingya genocide, Facebook has built a hate-speech classifier in Burmese by pouring resources into the project. It hired 100 Burmese-speaking content moderators, who manually built up a dataset of Burmese hate speech that was used to train an algorithm.
Facebook declined to say how many Assamese-speaking moderators it employs, after multiple requests from TIME. In a statement, Facebook said: “We don’t break down the number of content reviewers by language, in large part because the number alone isn’t representative of the people working on any given language or issue and the number changes based on staffing needs. We base our staffing on a number of different factors, including the geopolitical situation on the ground and the volume of content posted in a specific language.”
Facebook tells TIME that it has a list of countries in which it has prioritized its work preventing what it calls “offline harms,” which it defines as real-world physical violence. Myanmar, Sri Lanka, India, Libya, Ethiopia, Syria, Cameroon, the Democratic Republic of Congo and Venezuela are on that list, a spokesperson said.
The company also revealed to TIME some more of the 40-plus languages in which it has working hate speech detection algorithms. They include Mandarin and Arabic, and the two official languages of Sri Lanka: Sinhalese and Tamil. The company is currently building a hate speech classifier in Punjabi — another official Indian language that has more than 125 million speakers around the world.
Facebook has also declined to disclose the success rate of individual language algorithms. So, while globally Facebook’s algorithms now detect 80% of hate speech before it’s reported by a user, it is impossible to tell whether this is an average that masks lower success rates in some languages compared to others.
Two Facebook officials — one engineer who works on hate speech algorithms, and one member of Facebook’s “strategic response team” — told TIME that Facebook was building classifiers in several new languages, but did not want to set them loose on the site until they were more accurate, in order to avoid taking down posts that aren’t hateful.
But even when its algorithms flag hate speech content, Facebook says, human moderators always make the final decision on whether to remove it. Facebook says its moderators typically respond to reports within 24 hours. During that time, posts flagged as hateful remain online.
In “more than 50” languages, Facebook says, it has moderators working 24 hours a day, seven days a week. But there is “significant overlap” between those 50-plus languages and the 40-plus languages in which an algorithm is currently active, Facebook says. In still more languages, Facebook employs part-time moderators.
Because Facebook does not break down the number of content moderators by language, it is also hard to tell if there are also discrepancies between languages when it comes to how quickly and efficiently hateful posts are removed. According to Avaaz, minority languages are overlooked when it comes to the speed of moderation, too. When Avaaz reported 213 of the “clearest examples” of hateful posts in Assamese to Facebook, moderators eventually removed 96 within 24 hours. But the other 117 examples of “brazen hate speech” remain on the site, according to Zoyab.
Other observers question the validity of building an automated system that, they fear, will eventually outsource to machines the decisions on what kind of speech is acceptable. “Free speech implications should be extremely worrisome to everybody, since we do not know what Facebook is leaving up and taking down,” says Susan Benesch, the executive director of the Dangerous Speech Project, a non-profit that studies how public speech can cause real-world violence. “They are taking down millions of pieces of content every day, and you do not know where they are drawing the line.”
“Preventing it from being posted in the first place,” Benesch says, “would be much, much more effective.”
— Additional reporting by Emily Barone and Lon Tweeten/New York
Take a deep breath: Black Friday, Nov. 29, is nearly here. (With Cyber Monday following shortly afterwards on Dec. 2.) And, because there are just over three short weeks between Thanksgiving and Christmas, even those who are traditionally tardy in putting their holiday gift wish lists together will have likely come up with a few ideas in time for the retail — and e-tail — extravaganza.
To help shoppers maximize their purchasing power without, well, maxing out their credit cards, many big-box retailers are rolling out their Black Friday and Cyber Monday deals early. Some of these special offers are available now; others will go live on Black Friday and through the weekend. So if you’re hoping to load up on iPads, AirPods, Pixel 4s, and Fitbits: now(-ish) is the time.
Here are some of the best deals from Target, Amazon, Walmart, Best Buy and more, set to tempt tech-savvy customers with discounts on everything from gaming systems to smart home gadgets to new laptops, smartwatches, and tablets.
Amazon
Amazon Echo
Amazon Echo (second generation) for $59.99 (Save $40)
If you want to nab something better than your average smart speaker, the Amazon Echo Show 5 might well fit the bill. The mini smart display can show you anything from live TV and song lyrics to the view from your front door’s smart camera.
Improving on AirPods is no easy feat, but Apple’s done it with the new in-ear, noise-cancelling AirPods Pro. The truly wireless earbuds — wireless charging case included — still fit in your pocket easily, and new features like transparency mode keep you aware of your environment by mixing the noise around you with whatever you’re listening to.
Samsung’s stellar Galaxy Note 10+ is a workhorse of a smartphone with an impressive camera to match. An upgraded S-Pen with gesture controls means you can use that stylus for more than just note-taking or snapping a picture, too.
If you’re relying on your TV’s terrible speakers, it’s time to make a change. Sonos’ Beam is a soundbar with some built-in smarts, and supports voice assistants like Amazon Alexa and smart speaker standards like AirPlay 2.
Want to add some smarts to your dumb TV? Roku’s Streaming Stick+ puts the Roku experience at your fingertips without adding another eyesore to your living room. Stream live TV, use apps like Netflix and Disney+, and put that 4K TV to good use.
Google’s Nest Hub smart display puts your digital life on its 7-inch screen, and lets you chat with Google Assistant to control the lights, watch some YouTube videos, or call your mom.
World Wide Web inventor Tim Berners-Lee released an ambitious rule book for online governance — a bill of rights and obligations for the internet — designed to counteract the growing prevalence of such anti-democratic poisons as misinformation, mass surveillance and censorship.
The product of a year’s work by the World Wide Web Foundation, where Berners-Lee is a founding director, the “Contract for the Web” seeks commitments from governments and industry to make and keep knowledge freely available — a digital policy agenda true to the design vision of the 30-year-old web.
The contract is non-binding, however. And funders and partners in the endeavor include Google and Facebook, whose data-collecting business models and sensation-rewarding algorithms have been blamed for exacerbating online toxicity.
“We haven’t had a fairly complex, fairly complete plan of action for the web going forward,” Berners-Lee said in an interview. “This is the first time we’ve had a rule book in which responsibility is being shared.”
For instance, the contract proposes a framework for protecting online privacy and personal data with clearly defined national laws that give individuals greater control over the data collected about them. Independent, well-resourced regulators would offer the public effective means for redress. Current laws and institutions don’t measure up to that standard.
Amnesty International just released a report charging that Google and Facebook’s business models are predicated on the abuse of human rights.
Berners-Lee nevertheless says that “having them in the room is really important.” He said both companies had approached the foundation seeking participation.
“We feel that companies and governments deserve equal seats at the table and understanding where they’re coming from is equally valuable,” he said. “To have this conversation around a table without the tech companies, it just wouldn’t have the clout and we wouldn’t have ended up with the insights.”
The nonprofit foundation’s top donors include the Swedish, Canadian and U.S. governments and the Ford and Omidyar foundations.
One of its biggest challenges is the growing balkanization of the internet, with national governments led by China, Russia and Iran exerting increasing technical control over their domestic networks, tightening censorship and surveillance.
“The trend for balkanization is really worrying and it’s extreme at the moment in Iran,” said Berners-Lee. A strong government exhibits tolerance, the computer scientist added, for “other voices, opposition voices, foreign voices to be heard by its citizens.”
So how to prevent governments from restricting internet access at their borders?
One approach, said Berners-Lee, could be financial pressure. Multinational lenders could condition lower interest rates, for example, on a nation’s willingness to let information flow freely on its domestic network.
Elon Musk’s big reveal of his new Tesla “Cybertruck” cracked up Thursday night when he tried to demonstrate the vehicle’s shatterproof “armor glass.”
Broadcasting on a livestream, Musk talked about how strong the new Tesla was and was joined on stage by Franz von Holzhausen, Tesla’s chief designer, wielding a metal ball.
Then Musk asked von Holzhausen a question that may haunt him for some time: “Can you try and break this glass, please?”
With that, von Holzhausen threw a metal ball at the futuristic vehicle’s front window and smashed it.
“Oh my f—ing God,” Musk said. “Maybe that was a little hard.” In a show of confidence, von Holzhausen suggested throwing the metal ball at the rear window. “Try that one? Really?” asked Musk, moments before the window smashed.
“It didn’t go through, that’s the plus side,” said Musk.
The Cybertruck, which some have likened to something out of Knight Rider, was unveiled in Hawthorne, Calif. Fans cheered as the truck was paraded onto a smoke-filled stage to the sound of drums.
Tesla’s latest invention is inspired by the part submarine, part Lotus Esprit sports car featured in the James Bond film The Spy Who Loved Me. It will cost $39,900 and production is expected to begin in late 2021, according to Tesla.
Musk hasn’t had the best of luck with his inventions this week. His SpaceX Starship prototype exploded during a pressurization test on Wednesday, destroying the upper part of the rocket and sending it high into the air. The Starship is intended to carry passengers and cargo to the Moon and eventually Mars.
Every year, TIME highlights the Best Inventions that are making the world better, smarter and even a bit more fun. (See last year’s list here.)
To assemble our 2019 list, we solicited nominations across a variety of categories from our editors and correspondents around the world, as well as through an online application process. Then TIME evaluated each contender based on key factors, including originality, creativity, influence, ambition and effectiveness.
The result: 100 groundbreaking inventions that are changing the way we live, work, play and think about what’s possible.
(SAN FRANCISCO) — Google is making it harder for political advertisers to target specific types of people.
The company said that as of January, advertisers will only be able to target U.S. political ads based on broad categories such as gender, age and postal code. Currently, ads can be tailored for more specific groups — for instance, using information gleaned from public voter logs, such as political affiliation.
The change will take effect in the UK in the next week, before the general election, and in the European Union before the end of 2019. It will apply everywhere else in early January.
Google reiterated that ads making false claims are prohibited, adding that so-called deepfakes — realistic but false video clips — are not allowed. Neither are “demonstrably false” claims that could affect voter trust in an election.
But in a blog post announcing the news, Google Ads vice president Scott Spencer noted that political dialogue is important and “no one can sensibly adjudicate every political claim, counterclaim and insinuation.”
“So we expect that the number of political ads on which we take action will be very limited — but we will continue to do so for clear violations,” he wrote.
Like all Google advertisers, political advertisers can also use the broader practice of “contextual targeting,” which involves placing ads about, say, climate change on articles about the environment.
The company is also requiring advertiser verification for a broader range of political messages. Previously, only ads mentioning candidates or officeholders for federal positions required verification. Now that will also include ads touching on state officials and candidates as well as ballot measures.
The move follows Twitter’s ban on political ads, which goes into effect on Friday.
Twitter also placed restrictions on ads related to social causes such as climate change or abortion.
In these instances, advertisers won’t be able to target those ads down to a user’s ZIP code or use political categories such as “conservative” or “liberal.” Rather, targeting must be kept broad, based on a user’s state or province, for instance.
Facebook has not made sweeping changes to any of its ads policies, but thrust the issue into public discussion this fall when it confirmed it would not remove false or misleading ads by politicians.
Critics have harshly condemned Facebook’s decision. Twitter also faced a backlash from those who found its ban too far-reaching.
Google has taken a more middling stance, but it’s unlikely to please everyone. Earlier Wednesday, President Donald Trump’s campaign staff took issue with reports that Facebook might consider limiting its targeting practices.
“Facebook wants to take important tools away from us for 2020,” the campaign tweeted from its official account. “Tools that help us reach more great Americans & lift voices the media & big tech choose to ignore!”
Even Google’s limited targeting could receive backlash.
Critics and civil rights groups have said targeting specific zip codes or other small geographic zones can allow advertisers to discriminate or sway elections.
The expansion of Google’s verification process will take effect Dec. 3.
A security question from JetBlue is asking parents to do something they’d likely never do in public, or even in the semi-privacy of a browsing session: share the name of their favorite child.
In a screenshot posted on Twitter Sunday, a user shared part of the account sign-up process on JetBlue’s website — specifically, the security questions page. One question in particular asks, point-blank, “What is the name of your favorite child?”
“JetBlue savage for this,” the user wrote. (The question is technically “optional,” per the screenshot, but still.)
Naturally, the first response was a GIF of “baby Yoda” from the Disney+ Star Wars series The Mandalorian, a character that has recently become the internet’s favorite child. Another user suggested listing the name of her rescue dog instead of her children’s.
While most parents would likely claim they love all their children equally, at least one study has shown that parents favor, or at least show preferential treatment toward, one child. But naming a ‘favorite’ child could help parents in terms of security — a hacker trying to get into your account is unlikely to be clued into unspoken family dynamics. According to Infosec Institute, which provides training to IT and security professionals, choosing to answer less obvious security questions makes for a stronger account.
On the flipside, if your Internet presence makes it clear you only have two or three children, it’s a not-too-difficult guessing game to play out. And — in perhaps a worst-case scenario — it could also lead to some very awkward questions on a family vacation if you’re only offered an upgrade for you and the child in question.
Five years from now, the U.S.’ already overburdened mental health system may be short by as many as 15,600 psychiatrists as the growth in demand for their services outpaces supply, according to a 2017 report from the National Council for Behavioral Health. But some proponents say that, by then, an unlikely tool—artificial intelligence—may be ready to help mental health practitioners mitigate the impact of the deficit.
Medicine is already a fruitful area for artificial intelligence; it has shown promise in diagnosing disease, interpreting images and zeroing in on treatment plans. Though psychiatry is in many ways a uniquely human field, requiring emotional intelligence and perception that computers can’t simulate, even here, experts say, AI could have an impact. The field, they argue, could benefit from artificial intelligence’s ability to analyze data and pick up on patterns and warning signs so subtle humans might never notice them.
“Clinicians actually get very little time to interact with patients,” says Peter Foltz, a research professor at the University of Colorado Boulder who this month published a paper about AI’s promise in psychiatry. “Patients tend to be remote, it’s very hard to get appointments and oftentimes they may be seen by a clinician [only] once every three months or six months.”
AI could be an effective way for clinicians to both make the best of the time they do have with patients, and bridge any gaps in access, Foltz says. AI-aided data analysis could help clinicians make diagnoses more quickly and accurately, getting patients on the right course of treatment faster—but perhaps more excitingly, Foltz says, apps or other programs that incorporate AI could allow clinicians to monitor their patients remotely, alerting them to issues or changes that arise between appointments and helping them incorporate that knowledge into treatment plans. That information could be lifesaving, since research has shown that regularly checking in with patients who are suicidal or in mental distress can keep them safe.
Some mental-health apps and programs already incorporate AI—like Woebot, an app-based mood tracker and chatbot that combines AI and principles from cognitive behavioral therapy—but it’ll probably be some five to 10 years before algorithms are routinely used in clinics, according to psychiatrists interviewed by TIME. Even then, Dr. John Torous, director of digital psychiatry at Beth Israel Deaconess Medical Center in Boston, cautions that “artificial intelligence is only as strong as the data it’s trained on,” and, he says, mental health diagnostics have not been quantified well enough to program an algorithm. It’s possible that will happen in the future, with more and larger psychological studies, but, Torous says, “it’s going to be an uphill challenge.”
Not everyone shares that position. Speech and language have emerged as two of the clearest applications for AI in psychiatry, says Dr. Henry Nasrallah, a psychiatrist at the University of Cincinnati Medical Center who has written about AI’s place in the field. Speech and mental health are closely linked, he explains. Talking in a monotone can be a sign of depression; fast speech can point to mania; and disjointed word choice can be connected to schizophrenia. When these traits are pronounced enough, a human clinician might pick up on them—but AI algorithms, Nasrallah says, could be trained to flag signals and patterns too subtle for humans to detect.
Foltz and his team in Boulder are working in this space, as are big-name companies like IBM. Foltz and his colleagues designed a mobile app that takes patients through a series of repeatable verbal exercises, like telling a story and answering questions about their emotional state. An AI system then assesses those soundbites for signs of mental distress, both by analyzing how they compare to the individual’s previous responses, and by measuring the clips against responses from a larger patient population. The team tested the system on 225 people living in either Northern Norway or rural Louisiana—two places with inadequate access to mental health care—and found that the app was at least as accurate as clinicians at picking up on speech-based signs of mental distress.
Written language is also a promising area for AI-assisted mental health care, Nasrallah says. Studies have shown that machine learning algorithms trained to assess word choice and order are better than clinicians at distinguishing between real and fake suicide notes, meaning they’re good at picking up on signs of distress. Using these systems to regularly monitor a patient’s writing, perhaps through an app or periodic remote check-in with mental health professionals, could feasibly offer a way to assess their risk of self-harm.
Wearable devices offer further opportunities. Many people already use wearables to track their sleep and physical activity, both of which are closely related to mental well-being, Nasrallah says; using artificial intelligence to analyze those behaviors could lead to valuable insights for clinicians.
Even if these applications do pan out, Torous cautions that “nothing has ever been a panacea.” On one hand, he says, it’s exciting that technology is being pitched as a solution to problems that have long plagued the mental health field; but, on the other hand, “in some ways there’s so much desperation to make improvements to mental health that perhaps the tools are getting overvalued.”
Nasrallah and Foltz emphasize that AI isn’t meant to replace human psychiatrists or completely reinvent the wheel. (“Our brain is a better computer than any AI,” Nasrallah says.) Instead, they say, it can provide data and insights that will streamline treatment.
Alastair Denniston, an ophthalmologist and honorary professor at the U.K.’s University of Birmingham who this year published a research review about AI’s ability to diagnose disease, argues that, if anything, technology can help doctors focus on the human elements of medicine, rather than getting bogged down in the minutiae of diagnosis and data collection.
Artificial intelligence “may allow us to have more time in our day to spend actually communicating effectively and being more human,” Denniston says. “Rather than being diagnostic machines… [doctors can] provide some of that empathy that can get swallowed up by the business of what we do.”
The banana has been the subject of Andy Warhol’s cover art for the Velvet Underground’s debut album, is arguably the most devastating item in the Mario Kart video game franchise and is one of the world’s most consumed fruits. And humanity’s love of bananas may still be on the rise, according to data from the Food and Agriculture Organization of the United Nations. On average, says Chris Barrett, a professor of agriculture at Cornell University, citing that U.N. data, every person on earth chows down on 130 bananas a year, at a rate of roughly two and a half a week.
But the banana as we know it may also be on the verge of extinction. The situation led Colombia—where the economy relies heavily on the crop, as it does in several other countries including Ecuador, Costa Rica and Guatemala—to declare a national state of emergency in August. Banana experts around the world have raised concerns that it may be too late to reverse the damage.
The reason for the problem comes down to a single disease, but it also has far-reaching implications—and the world is watching. Even if the world’s relationship to bananas may never be the same, the lessons of the fruit can still save us from damage that could hit far beyond the produce aisle.
“The story of the banana is really the story of modern agriculture exemplified in a single fruit,” says Daniel Bebber, who leads the BananEx research group at the University of Exeter. “It has all of the ingredients of equitability and sustainability issues, disease pressure, and climate change impact all in one. It’s a very good lesson for us.”
Ninety-nine percent of exported bananas are a variety called the Cavendish—the attractive, golden-yellow fruit seen in the supermarket today.
But that wasn’t always the case. There are many varieties of banana in the world, and until the 1950s, the dominant export variety was the Gros Michel. It was widely considered tastier than the Cavendish, and more difficult to bruise. But in the 1950s, the crop was swept by a strain of Panama disease, also known as banana wilt, brought on by the spread of a noxious, soil-inhabiting fungus. Desperate for a solution, the world’s banana farmers turned to the Cavendish. The Cavendish was resistant to the disease and fit other market needs: it could stay green for several weeks after being harvested (ideal for shipments to Europe), it had a high yield rate and it looked good in stores. Plus, multinational fruit companies had no other disease-resistant variety available that could be ready quickly for mass exportation.
The switch worked. As the Gros Michel was ravaged by disease, the Cavendish banana took over the world’s markets and kitchens. In fact, the entire banana supply chain is now set up to suit the very specific needs of that variety.
To the people who pay attention to such things, it wasn’t long before a case of banana déjà vu set in: the Cavendish had supplanted the Gros Michel, but—even though it had initially been selected for being disease-resistant—it was still at risk. Almost a decade ago, Dan Koeppel, author of Banana: The Fate of the Fruit That Changed the World, warned in an NPR interview that Panama disease would return to the world’s largest banana exporters, and this time with a strain that would hit the Cavendish hard. “[Every] single banana scientist I spoke to—and that was quite a few—says it’s not an ‘if,’ it’s a ‘when,’ and 10 to 30 years,” he said. “It only takes a single clump of contaminated dirt, literally, to get this thing rampaging across entire continents.”
Sure enough, the confirmation of the presence of Tropical Race 4 (TR4), another strain of Panama disease, on banana farms in Colombia prompted this summer’s declaration of emergency there.
“The situation is very urgent,” says Bebber.
There are any number of ways the problem can spread. When it comes to bananas, everything from truck tires to workers’ boots can be disease carriers. But the bigger problem is how hard it is to stop. Because banana farmers are overwhelmingly growing the same exact crop—the Cavendish—they are all vulnerable to the same diseases.
“A lot of people would agree that we need to move to a more diverse, more sustainable system for bananas and agriculture in general,” says Bebber, “where we don’t put all our hope into a single, genetically identical crop.”
There’s a name for this situation: monoculture, the practice of fostering just one variety of something. Monoculture has its benefits. The entire system is standard, so there is rarely a need for new production and maintenance processes, and everything is compatible and familiar to users. On the other hand, as banana farmers learned, in a monoculture all instances are prone to the same set of attacks. If someone or something figures out how to affect just one, the entire system is put at risk.
And as the banana industry has begun to battle the effects of monoculture, someone else has taken notice: the tech world.
The parallel was noticed as early as the late 1990s. Stephanie Forrest, one of the early researchers in this area, commonly cites the banana problem in lectures explaining the importance of diversity in computer systems. Forrest argues that some of the most notorious software attacks in history are comparable to Panama disease’s threat to the Cavendish; uniform software systems lead to uniform vulnerabilities. For example, the 1988 Morris Worm infected an estimated 10% of all computers connected to the Internet within just 24 hours, and, more recently, the 2016 Mirai Botnet, which allowed an outside party to remotely control a network of internet-connected devices, brought down Twitter, Netflix, CNN and more.
“Monocultures are dangerous in almost every facet of life,” echoes Fred B. Schneider, a cybersecurity expert at Cornell University. “With people, of course, populations are stronger and more disease-resistant if there’s more genetic diversity. And with transportation, it’s more effective to have several different options—when a train line is shut down, if you have other choices at your disposal, like a car or another form of transit, you won’t be stuck.”
Schneider points out that software monocultures are common because, without them, using your computer would be a lot harder. Default configuration settings, for example, are the norm to help users who may not be experts in the technology they’re using. Defaults like that can protect people from some problems, but also lead to others, as all the systems using the same default are vulnerable to the same problems.
Awareness of the problem, informed by the issues facing crops like bananas, has led technologists to take steps to introduce artificial diversity into their systems. “To make a system artificially diverse, you just rearrange its guts in ways where the differences do not affect functionality in a material way,” Schneider says. Microsoft implemented one of the first large-scale commercial deployments of artificial diversity in its Windows operating system, by randomizing the internal locations where important pieces of system data were stored.
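To make that idea concrete, here is a minimal sketch (illustrative only, and not Microsoft’s implementation) of artificial diversity using a mechanism Python itself ships with: since version 3.3, the interpreter randomizes string hashes on each run, so two otherwise identical processes arrange their internal hash tables differently while behaving the same for the user.

```python
# Illustrative sketch of "artificial diversity": the same program, run twice,
# arranges its internals differently without changing what it does.
# This relies on CPython's per-process hash randomization (on by default
# since Python 3.3), introduced to blunt hash-collision attacks.
import subprocess
import sys

snippet = 'print(hash("example-input"))'

# Launch two fresh interpreter processes running identical code.
# Each process draws its own random hash seed, so the printed values differ,
# even though dicts, sets and lookups work identically in both.
for run in (1, 2):
    result = subprocess.run(
        [sys.executable, "-c", snippet],
        capture_output=True, text=True, check=True,
    )
    print(f"run {run}: hash of the same string = {result.stdout.strip()}")
```

Both runs produce correct results; only the internal arrangement differs, which is exactly the property Schneider describes: attackers cannot count on every instance being laid out the same way.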
For bananas, addressing the problems caused by monoculture may be harder, as market standards and supply chains make it very expensive for fruit companies to cultivate multiple varieties.
A Colombian worker carries bananas to a transport car at a banana plantation. (Jan Sochor/LatinContent via Getty Images)
Existing disease-resistant varieties haven’t made inroads on the international market, but the Honduran Foundation for Agricultural Research (FHIA) has spent more than three years working on developing a disease-resistant variety that is as close as possible to the Cavendish, so that the world’s banana infrastructure doesn’t have to be reshaped from scratch. Still, that’s a process that can take 15 to 20 years, Bebber estimates.
Genetic engineering can lead to the development of new varieties at much faster rates than traditional breeding methods, but it can also turn consumers off. However, it has been the answer to similar problems in the past—for example, when the papaya ringspot virus threatened the papaya supply in the 1990s, “the major supply shock was averted through the development of a transgenic ringspot virus-resistant papaya,” explains Cornell’s Barrett. He believes that consumers’ fears might ease if it becomes one of the only viable answers to the issues created by monoculture production. The UK-based biotech company Tropic Biosciences has received $10 million in funding to use gene-editing technology to research solutions to widespread issues with tropical crops, focusing especially on disease resistance in bananas.
And while even the most Cavendish-like of FHIA’s disease-resistant varieties, a banana known as the FHIA-18, hasn’t yet met the standards of multinational buyers, that may change, according to Adolfo Martinez, director general of FHIA. “It’s still not close enough to the Cavendish,” he says, but he thinks the crisis may convince them. “Maybe now, companies will be more interested in it.”
So, what’s next for the banana? Will it simply disappear from our diets, album covers and video games? Bebber says the banana may change, but hopes are high that it won’t completely vanish. “Science,” he says, “will find a way.” Meanwhile, tech researchers are watching—hoping they can once again learn a lesson from biology, learning how to prevent a crisis before everything goes bananas.
(NEW YORK) — Disney’s new streaming service has added a disclaimer to Dumbo, Peter Pan and other classics because they depict racist stereotypes, underscoring a challenge media companies face when they resurrect older movies in modern times.
The move comes as Disney+ seems to be an instant hit. It attracted 10 million subscribers in just one day. The disclaimer reads, “This program is presented as originally created. It may contain outdated cultural depictions.”
Companies have been grappling for years with how to address stereotypes that were in TV shows and movies decades ago but look jarring today. Streaming brings the problem to the fore.
In Dumbo, from 1941, crows that help Dumbo learn to fly are depicted with exaggerated black stereotypical voices. The lead crow’s name is “Jim Crow,” a term that describes a set of laws that legalized segregation. In Peter Pan, from 1953, Native American characters are caricatured. Other Disney movies with the disclaimer include The Jungle Book and Swiss Family Robinson.
Pocahontas and Aladdin do not have it, despite rumblings by some that those films contain stereotypes, too.
On personal computers, the disclaimer appears as part of the text description of shows and movies underneath the video player. It’s less prominent on a cellphone’s smaller screen. Viewers are instructed to tap on a “details” tab for an “advisory.”
Disney’s disclaimer echoes what other media companies have done in response to problematic videos, but many people are calling on Disney to do more.
The company “needs to follow through in making a more robust statement that this was wrong, and these depictions were wrong,” said Psyche Williams-Forson, chairwoman of American studies at the University of Maryland at College Park. “Yes, we’re at a different time, but we’re also not at a different time.”
She said it is important that the images are shown rather than deleted, because viewers should be encouraged to talk with their children and others about the videos and their part in our cultural history.
Disney’s disclaimer is a good way to begin discussion about the larger issue of racism that is embedded in our cultural history, said Gayle Wald, American studies chairwoman at George Washington University. “Our cultural patrimony in the end is deeply tethered to our histories of racism, our histories of colonialism and our histories of sexism, so in that sense it helps to open up questions,” she said.
Wald said Disney is “the most culturally iconic and well-known purveyor of this sort of narrative and imagery,” but it’s by no means alone.
Universal Pictures’ teen comedy Sixteen Candles has long been decried for stereotyping Asians with its “Long Duk Dong” character.
Warner Bros. faced a similar problem with its “Tom and Jerry” cartoons that are available for streaming. Some of the cartoons now carry a disclaimer as well, but it goes further than Disney’s statement. Rather than refer to vague “cultural depictions,” the Warner Bros. statement calls its own cartoons out for “ethnic and racial prejudices.”
“While these cartoons do not represent today’s society, they are being presented as they were originally created, because to do otherwise would be the same as claiming these prejudices never existed,” the statement reads.
At times, Disney has disavowed a movie entirely.
Song of the South, from 1946, which won an Oscar for the song “Zip-A-Dee-Doo-Dah,” was never released for home video and hasn’t been shown theatrically for decades, due to its racist representation of the plantation worker Uncle Remus and other characters. It isn’t included in Disney+, either.
Disney and Warner Bros. did not respond to requests for comment.
Sonny Skyhawk, an actor and producer who created the group American Indians in Film and Television, also found the two-sentence disclaimer lacking.
What would serve minority groups better than any disclaimer is simply offering them opportunities to tell their own stories on a platform like Disney+, Skyhawk said. He said that when he talks to young Indian kids, “the biggest negative is they don’t see themselves represented in America.”
___
Associated Press writer Terry Tang in Phoenix contributed to this report.