Monday, 31 August 2020

New story in Technology from Time: Slowly Losing Your Mind in Lockdown? 5 Apps to Boost Your Mental Health



It should come as no surprise to learn that being stuck inside for months on end with minimal human contact is not good for your well-being. As the COVID-19 pandemic continues to disrupt any semblance of normalcy throughout the U.S. and elsewhere, many people are feeling the effects of reduced employment and other disruptions of daily life—compounded by more visible instances of targeted police brutality and racial discrimination.

If you’re stressed out, exhausted by the stream of bad news, or have simply fallen off whatever good habits you had in 2019, here’s how you can use your mobile device to get back on track. With apps that make chores fun, simple meditation tools, or services to address your mental health issues, you can better prepare yourself for whatever else this year has in store.

Get your sleep schedule back on track with Pzizz

Platform: iOS, Android

Pzizz

There’s a good chance you’ve got a lot on your mind right now—which means counting sheep might not cut it when it comes to getting to sleep, and staring at your phone while doomscrolling is almost certainly even worse. And while there are a handful of apps designed to track your sleep, an app meant to help you actually get to sleep is just as important.

Pzizz is a sleep app that uses audio cues based on sleep research to help you fall asleep. It uses a mixture of speech, music, and sound effects to relax you and prime your body for some downtime, be it for a few minutes or a whole night. You can adjust the mix as well, leaning toward a more talkative or musical sleep aid for the allotted time period. Subscribing to the premium version of the app nets you access to a wider variety of sounds and guided sleep experiences.

Gamify your routines with Habitica

Platform: iOS, Android, Web

Habitica

If you need a little motivation to get done what you need to get done on a daily basis, and don’t mind adding a little fantastical vibe to the mix, try out Habitica, a task management and to-do list service that gamifies the work you accomplish. You create an RPG-esque character, which “defeats enemies” and levels up whenever you confirm that you’ve accomplished one of your IRL tasks—whether those are daily activities, errands to run, or habits to build. You can play by yourself or team up with friends for a more social element (and to add accountability to the mix); in either case, you can obtain prizes and gear for your fictional avatar by checking off boxes on your to-do list.

Reflect for a moment with Enso

Platform: iOS

Enso

If you’re like me, and just want to practice sitting for a few minutes with no distractions, you should try out Enso. It’s a minimal but elegant iOS meditation app, perfect for both beginners and experienced practitioners. There are no voices to distract you, and no music to focus on or tolerate. Just set a timer, hit start, and wait until it runs out.

You can customize your session with multiple bells to signify prep time, sitting time, and intervals for those engaging in a more advanced meditation practice. Buying Enso’s $2.99 pro version will net you some much-needed features, like Apple Health integration, an in-app audio player for custom meditation tunes, and extra alert tones you can pick to ease yourself in and out of your sitting practice.

For some good bedtime white noise, use Dark Noise

Platform: iOS

Dark Noise

Trying to read a book or focus on some work while the outside world honks, shouts, and distracts is no fun. That’s why white noise is so useful, drowning out other sounds with a more predictable, familiar tone. That’s what Dark Noise is for.

The app features a wide array of sounds, from white, brown, pink, and grey noises, to heavy rains and waterfalls, crickets, wind chimes, and coffee shops. With such a selection, you’re sure to find a noise to keep you distracted, focused, or drowsy—whatever you need. And there’s a timer, so you can have the app shut down on its own after you finish work (or fall asleep).
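The color names, by the way, describe how a noise’s energy is spread across frequencies: white noise is equally loud at every frequency, pink noise’s power falls off as 1/f, and brown noise’s as 1/f². As a rough illustration of the general technique—a generic spectral-shaping sketch, not Dark Noise’s actual implementation—colored noise can be produced by reshaping white noise in the frequency domain:

```python
import numpy as np

def colored_noise(n, exponent, seed=0):
    """Generate n samples of noise whose power spectrum falls off as 1/f**exponent.

    exponent = 0 -> white, 1 -> pink, 2 -> brown (red).
    """
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n)

    # Shape the spectrum: amplitude scales as 1/f^(exponent/2),
    # so power scales as 1/f^exponent.
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]  # avoid division by zero at the DC bin
    spectrum /= freqs ** (exponent / 2)

    noise = np.fft.irfft(spectrum, n)
    return noise / np.max(np.abs(noise))  # normalize to [-1, 1]
```

Played back at an audio sample rate, higher exponents sound progressively “deeper,” since more of the energy sits at low frequencies—which is why many people find brown noise the easiest to sleep to.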

Talk to someone with BetterHelp

Platform: iOS, Android, Web

BetterHelp

Everyone needs someone to talk to—especially now. With in-person therapy currently out of reach for many thanks to the coronavirus, those seeking mental-health treatment might want to consider BetterHelp. Using the app, you can speak to a licensed psychologist or counselor via text, phone, or video. With no insurance necessary, pricing ranges from $40 to $70 per month, and there are over 10,000 therapists and counselors—all with over three years of therapy experience—to choose from (you’ll take a quiz to see which one is the best fit for you).

Sunday, 30 August 2020

Fox News Breaking News Alert

Six people shot, including one fatally outside a Chicago pancake restaurant

08/30/20 12:47 PM

Friday, 28 August 2020

Fox News Breaking News Alert

Accused Kenosha shooter's lawyer claims self-defense

08/28/20 4:25 PM

Fox News Breaking News Alert

President Trump grants Alice Johnson full pardon

08/28/20 12:20 PM

Fox News Breaking News Alert

Rand Paul calls for FBI arrests, investigation into 'mob' he believes 'would have killed us' outside White House

08/28/20 6:44 AM

New story in Technology from Time: You Can Block the Music of Problematic Artists From Playing on Your Music App. Here’s How.



If you’re a fan of streaming music, personalized playlists, and mixes made “just for you,” you’ve no doubt run into some boring, bad and even downright offensive songs you’d rather not hear again. And if you’re familiar with the news surrounding today’s most popular artists, you probably have a running list of artists who you’d rather never hear again—no matter how good their newest single may be—because of their misogyny, or racism, or other problematic behavior.

While you can’t hit fast-forward on your car radio, you can alter how your streaming service recommends songs to you and force it to never again play the tracks or artists you want out of your life—the extent of that control depends, though, on which streaming service you choose. Here’s how each major streaming service handles blocking and filtering artists from reaching your ears.

Amazon Music

Amazon Music, the default music service on Amazon’s Echo devices, offers you both customized playlists and radio stations based on an artist or song you pick. It doesn’t let you filter or block artists from said playlists or stations, but you can upvote or downvote songs in radio stations to better personalize your listening experience.

You can, however, block songs with explicit language in them by hitting the three-dot menu icon in the “My Music” tab and enabling the “Block Explicit Songs” option. Amazon did not respond to TIME’s inquiry about blocking or filtering artists any further.

Tidal

Tidal, known for its catalog of high-bitrate music (for better audio quality) and Beyoncé’s visual albums, makes it relatively easy to block artists or even particular songs, and gives you an easy way to manage your list of expunged musicians once you’ve made one.

While you can’t block an artist directly from their artist page, you can block them (or a particular song) from their Artist or Track radio playlists, or from your “My Mix” playlist. If you know exactly who you want to cull from your listening experience, the quickest way to get it done is to visit the artist’s profile anyway, hit the radio button next to the artist name, tap one of their songs, and hit the block button at the bottom of the Now Playing screen.

Should you change your mind, you can hit the Settings icon in your “My Collection” tab, then scroll down to view and unblock all your selected artists and songs.

Tidal lets you block both tracks and artists, though you can only do it from the Now Playing screen in playlists or radio stations.

Apple Music

Apple Music, the company’s streaming-service alternative to its iTunes Store, features both Apple-curated playlists and custom radio stations that pick songs based on your listening history. But Apple Music won’t let you block an artist or filter their songs out of playlists; it does, however, enable you to adjust the app’s recommendation system based on how you rate songs.

You can vote to “love” or “dislike” songs in Apple Music, which it takes into account when building playlists based on your listening history. Apple did not respond to TIME’s inquiry about blocking or filtering artists any further.

Spotify

Of all the streaming services we looked at, Spotify has the most straightforward method of blocking artists from appearing on playlists and radio stations. While you can’t block specific songs, you can block an artist’s work by visiting their profile, hitting the three-dot menu icon, and selecting “Don’t play this artist.” After that, you won’t encounter them in any playlists or radio stations.

A Spotify artist page, where you can block an artist from appearing in playlists and radio stations.

Pandora

Pandora’s personalized radio stations are perfect for discovering new artists and songs for your socially distant summer fun. But when it comes to dismissing artists you no longer want to hear, you only have one option: downvote them. That won’t entirely block the artist (or even that specific track), but it will reduce how often the artist appears in your radio stations.

In short, if you’re using Pandora, be sure to give the artist you want to avoid a thumbs down rating whenever possible to decrease the likelihood they pop up again in your stations.

YouTube Music

YouTube Music, parent company Alphabet’s replacement for its Google Play Music service (scheduled to shut down completely this December), doesn’t offer much in terms of artist control. Currently, YouTube Music does not allow users to filter or block artists.

Fox News Breaking News Alert

Japanese PM Abe to resign over health issues: reports

08/27/20 11:37 PM

Fox News Breaking News Alert

Sen. Rand Paul says he was attacked by mob after RNC

08/27/20 11:27 PM

Thursday, 27 August 2020

Fox News Breaking News Alert

President Trump addresses the RNC to accept the presidential nomination after introduction by Ivanka

08/27/20 7:23 PM

New story in Technology from Time: Facebook’s Ties to India’s Ruling Party Complicate Its Fight Against Hate Speech



In July 2019, Alaphia Zoyab was on a video call with Facebook employees in India, discussing some 180 posts by users in the country that Avaaz, the watchdog group where she worked, said violated Facebook’s hate speech rules. But halfway through the hour-long meeting, Shivnath Thukral, the most senior Facebook official on the call, got up and walked out of the room, Zoyab says, saying he had other important things to do.

Among the posts was one by Shiladitya Dev, a lawmaker in the state of Assam for Prime Minister Narendra Modi’s Hindu nationalist Bharatiya Janata Party (BJP). He had shared a news report about a girl being allegedly drugged and raped by a Muslim man, and added his own comment: “This is how Bangladeshi Muslims target our [native people] in 2019.” But rather than removing it, Facebook allowed the post to remain online for more than a year after the meeting, until TIME contacted Facebook to ask about it on Aug. 21. “We looked into this when Avaaz first flagged it to us, and our records show that we assessed it as a hate speech violation,” Facebook said in a statement to TIME. “We failed to remove upon initial review, which was a mistake on our part.”

Thukral was Facebook’s public policy director for India and South Asia at the time. Part of his job was lobbying the Indian government, but he was also involved in discussions about how to act when posts by politicians were flagged as hate speech by moderators, former employees tell TIME. Facebook acknowledges that Thukral left the meeting, but says he never intended to stay for its entirety, and joined only to introduce Zoyab, whom he knew from a past job, to his team. “Shivnath did not leave because the issues were not important,” Facebook said in the statement, noting that the company took action on 70 of the 180 posts presented during the meeting.

Shivnath Thukral at the Moving to Better Ground session during the India Economic Summit in Mumbai, November 2011. (Eric Miller—World Economic Forum)

The social media giant is under increasing scrutiny for how it enforces its hate speech policies when the accused are members of Modi’s ruling party. Activists say some Facebook policy officials are too close to the BJP, and accuse the company of putting its relationship with the government ahead of its stated mission of removing hate speech from its platform—especially when ruling-party politicians are involved. Thukral, for instance, worked with party leadership to assist in the BJP’s 2014 election campaign, according to documents TIME has seen.

Facebook’s managing director for India, Ajit Mohan, denied suggestions that the company had displayed bias toward the BJP in an Aug. 21 blog post titled, “We are open, transparent and non-partisan.” He wrote: “Despite hailing from diverse political affiliations and backgrounds, [our employees] perform their respective duties and interpret our policies in a fair and non-partisan way. The decisions around content escalations are not made unilaterally by just one person; rather, they are inclusive of views from different teams and disciplines within the company.”

Facebook published the blog post after the Wall Street Journal, citing current and former Facebook employees, reported on Aug. 14 that the company’s top policy official in India, Ankhi Das, pushed back against other Facebook employees who wanted to label a BJP politician a “dangerous individual” and ban him from the platform after he called for Muslim immigrants to be shot. Das argued that punishing the state lawmaker, T. Raja Singh, would hurt Facebook’s business prospects in India, the Journal reported. (Facebook said Das’s intervention was not the sole reason Singh was not banned, and that it was still deciding if a ban was necessary.)

Read more: Can the World’s Largest Democracy Endure Another Five Years of a Modi Government?

Those business prospects are sizeable. India is Facebook’s largest market, with 328 million people using the social media platform. Some 400 million Indians also use Facebook’s messaging service WhatsApp — a substantial chunk of the country’s estimated 503 million internet users. The platforms have become increasingly important in Indian politics; after the 2014 elections, Das published an op-ed arguing that Modi had won because of the way he leveraged Facebook in his campaign.

But Facebook and WhatsApp have also been used to spread hate speech and misinformation that have been blamed for helping to incite deadly attacks on minority groups amid rising communal tensions across India—despite the company’s efforts to crack down. In February, a video of a speech by BJP politician Kapil Mishra was uploaded to Facebook, in which he told police that unless they removed mostly-Muslim protesters occupying a road in Delhi, his supporters would do it themselves. Violent riots erupted within hours. (In that case, Facebook determined the video violated its rules on incitement to violence and removed it.)

WhatsApp, too, has been used with deadly intent in India — for example by cow vigilantes, Hindu mobs that have attacked Muslims and Dalits accused of killing cows, an animal sacred in Hinduism. At least 44 people, most of them Muslims, were killed by cow vigilantes between May 2015 and December 2018, according to Human Rights Watch. Many cow vigilante murders happen after rumors spread on WhatsApp, and videos of lynchings and beatings are often shared via the app too.

Read more: How the Pandemic is Reshaping India

TIME has learned that Facebook, in an effort to evaluate its role in spreading hate speech and incitements to violence, has commissioned an independent report on its impact on human rights in India. Work on the India audit, previously unreported, began before the Journal published its story. It is being conducted by the U.S. law firm Foley Hoag and will include interviews with senior Facebook staff and members of civil society in India, according to three people with knowledge of the matter and an email seen by TIME. (A similar report on Myanmar, released in 2018, detailed Facebook’s failings on hate speech that contributed to the Rohingya genocide there the previous year.) Facebook declined to confirm the report.

But activists, who have spent years monitoring and reporting hate speech by Hindu nationalists, tell TIME that they believe Facebook has been reluctant to police posts by members and supporters of the BJP because it doesn’t want to pick fights with the government that controls its largest market. The way the company is structured exacerbates the problem, analysts and former employees say, because the same people responsible for managing the relationship with the government also contribute to decisions on whether politicians should be punished for hate speech.

“A core problem at Facebook is that one policy org is responsible for both the rules of the platform and keeping governments happy,” Alex Stamos, Facebook’s former chief security officer, tweeted in May. “Local policy heads are generally pulled from the ruling political party and are rarely drawn from disadvantaged ethnic groups, religious creeds or castes. This naturally bends decision-making towards the powerful.”

Some activists have grown so frustrated with the Facebook India policy team that they’ve begun to bypass it entirely in reporting hate speech. Following the call when Thukral walked out, Avaaz decided to begin reporting hate speech directly to Facebook’s company headquarters in Menlo Park, Calif. “We found Facebook India’s attitude utterly flippant, callous, uninterested,” says Zoyab, who has since left Avaaz. Another group that regularly reports hate speech against minorities on Facebook in India, which asked not to be named out of fear for the safety of its staffers, said it has been doing the same since 2018. In a statement, Facebook acknowledged some groups that regularly flag hate speech in India are in contact with Facebook headquarters, but said that did not change the criteria by which posts were judged to be against its rules.

Read more: Facebook Says It’s Removing More Hate Speech Than Ever Before. But There’s a Catch

The revelations in the Journal set off a political scandal in India, with opposition politicians calling for Facebook to be officially investigated for alleged favoritism toward Modi’s party. And the news caused strife within the company too: In an internal open letter, Facebook employees called on executives to denounce “anti-Muslim bigotry” and do more to ensure hate speech rules are applied consistently across the platform, Reuters reported. The letter alleges that there are no Muslim employees on the India policy team; in response to questions from TIME, Facebook said it was legally prohibited from collecting such data.

Facebook friends in high places

While it is common for companies to hire lobbyists with connections to political parties, activists say the history of staff on Facebook’s India policy team, as well as their incentive to keep the government happy, creates a conflict of interest when it comes to policing hate speech by politicians. Before joining Facebook, Thukral had worked in the past on behalf of the BJP. Despite this, he was involved in making decisions about how to deal with politicians’ posts that moderators flagged as violations of hate speech rules during the 2019 elections, the former employees tell TIME. His Facebook likes include a page called “I Support Narendra Modi.”

Former Facebook employees tell TIME they believe a key reason Thukral was hired in 2017 was because he was seen as close to the ruling party. In 2013, during the BJP’s eventually successful campaign to win national power at the 2014 elections, Thukral worked with senior party officials to help run a pro-BJP website and Facebook page. The site, called Mera Bharosa (“My Trust” in Hindi) also hosted events, including a project aimed at getting students to sign up to vote, according to interviews with people involved and documents seen by TIME. A student who volunteered for a Mera Bharosa project told TIME he had no idea it was an operation run in coordination with the BJP, and that he believed he was working for a non-partisan voter registration campaign. According to the documents, this was a calculated strategy to hide the true intent of the organization. By early 2014, the site changed its name to “Modi Bharosa” (meaning “Modi Trust”) and began sharing more overtly pro-BJP content. It is not clear whether Thukral was still working with the site at that time.

In a statement to TIME, Facebook acknowledged Thukral had worked on behalf of Mera Bharosa, but denied his past work presented a conflict of interest because multiple people are involved in significant decisions about removing content. “We are aware that some of our employees have supported various campaigns in the past both in India and elsewhere in the world,” Facebook said as part of a statement issued to TIME in response to a detailed series of questions. “Our understanding is that Shivnath’s volunteering at the time focused on the themes of governance within India and are not related to the content questions you have raised.”

Now, Thukral has an even bigger job. In March 2020, he was promoted from his job at Facebook to become WhatsApp’s India public policy director. In the role, New Delhi tech policy experts tell TIME, one of Thukral’s key responsibilities is managing the company’s relationship with the Modi government. It’s a crucial job, because Facebook is trying to turn the messaging app into a digital payments processor — a lucrative idea potentially worth billions of dollars.

In April, Facebook announced it would pay $5.7 billion for a 10% stake in Reliance Jio, India’s biggest telecoms company, which is owned by India’s richest man, Mukesh Ambani. On a call with investors in May, Facebook CEO Mark Zuckerberg spoke enthusiastically about the business opportunity. “With so many people in India engaging through WhatsApp, we just think this is going to be a huge opportunity for us to provide a better commerce experience for people, to help small businesses and the economy there, and to build a really big business ourselves over time,” he said, talking about plans to link WhatsApp Pay with Jio’s vast network of small businesses across India. “That’s why I think it really makes sense for us to invest deeply in India.”

Read more: How Whatsapp Is Fueling Fake News Ahead of India’s Elections

But WhatsApp’s future as a payments application in India depends on final approval from the national payments regulator, which is still pending. Facebook’s hopes for expansion in India have been quashed by a national regulator before, in 2016, when the country’s telecoms watchdog said Free Basics, Facebook’s plan to provide free Internet access for only some sites, including its own, violated net neutrality rules. One of Thukral’s priorities in his new role is ensuring that a similar problem doesn’t strike down Facebook’s big ambitions for WhatsApp Pay.

‘No foreign company in India wants to be in the government’s bad books’

While the regulator is technically independent, analysts say that Facebook’s new relationship with the wealthiest man in India will likely make it much easier to gain approval for WhatsApp Pay. “It would be easier now for Facebook to get that approval, with Ambani on its side,” says Neil Shah, vice president of Counterpoint Research, an industry analysis firm. And goodwill from the government itself is important too, analysts say. “No foreign company in India wants to be in the government’s bad books,” says James Crabtree, author of The Billionaire Raj. “Facebook would very much like to have good relations with the government of India and is likely to think twice about doing things that will antagonize them.”

The Indian government has shown before it is not afraid to squash the dreams of foreign tech firms. In July, after a geopolitical spat with China, it banned dozens of Chinese apps including TikTok and WeChat. “There has been a creeping move toward a kind of digital protectionism in India,” Crabtree says. “So in the back of Facebook’s mind is the fact that the government could easily turn against foreign tech companies in general, and Facebook in particular, especially if they’re seen to be singling out major politicians.”

With hundreds of millions of users already in India, and hundreds of millions more who don’t have smartphones yet but might in the near future, Facebook has an incentive to avoid that possibility. “Facebook has said in the past that it has no business interest in allowing hate speech on its platform,” says Chinmayi Arun, a resident fellow at Yale Law School, who studies the regulation of tech platforms. “It’s evident from what’s going on in India that this is not entirely true.”

Facebook says it is working hard to combat hate speech. “We want to make it clear that we denounce hate in any form,” said Mohan, Facebook’s managing director in India, in his Aug. 21 blog post. “We have removed and will continue to remove content posted by public figures in India when it violates our Community Standards.”

But scrubbing hate speech remains a daunting challenge for Facebook. At an employee meeting in June, Zuckerberg highlighted Mishra’s February speech ahead of the Delhi riots, without naming him, as a clear example of a post that should be removed. The original video of Mishra’s speech was taken down shortly after it was uploaded. But another version of the video, with more than 5,600 views and a long list of supportive comments underneath, remained online for six months until TIME flagged it to Facebook in August.

Fox News Breaking News Alert

WATCH LIVE: RNC's final night to feature Dr. Ben Carson, Ivanka Trump, Alice Johnson and a high-stakes speech from President Trump

08/27/20 5:30 PM

Fox News Breaking News Alert

Mets, Marlins players walk off Citi Field in protest

08/27/20 5:02 PM

New story in Technology from Time: Video Games May Be Key to Keeping World War II Memory Alive. Here Are 5 WWII Games Worth Playing, According to a Historian



The 75th anniversary of Japan formally surrendering to the U.S. aboard the battleship USS Missouri on Sept. 2, 1945, arrives at a moment when the question of how the war is remembered feels more necessary than ever. Veterans’ stories, books, movies and TV shows have kept memories of the war alive for the last 75 years, but how will those stories be told when there are fewer people around who lived through those era-defining years?

Recently, some people in younger generations have turned to a perhaps surprising source for World War II stories: video games. Games have become more realistic not only in terms of technological advancements, but also in terms of featuring real people and, at least in the case of blockbuster games like Medal of Honor and Call of Duty, getting input from real experts on military history.

For example, the upcoming virtual-reality game Medal of Honor: Above and Beyond will feature documentary shorts, and creators interviewed WWII veterans about their wartime experiences to inform the game, which includes missions across Europe and in Tunisia. Inside their headsets, players will walk in the boots of a combat engineer recruited for espionage work by the Office of Strategic Services, which was a real U.S. intelligence agency during World War II and a precursor to the CIA. On Thursday, at the virtual Gamescom convention, Respawn Entertainment unveiled a new trailer for the game, which will be released this holiday season.

A scene from the upcoming virtual-reality game Medal of Honor: Above and Beyond. (Respawn Entertainment)

One of the most popular WWII video game franchises, Medal of Honor began with educational aspirations. As Peter Hirschmann, who worked on the original 1999 game and is the game director for Medal of Honor: Above and Beyond, recalled to The Hollywood Reporter earlier this year, Saving Private Ryan director Steven Spielberg knew that his R-rated movie wasn’t for kids but expressed a wish that there were more popular culture available to spark interest in the war’s history among younger viewers. “He then had the foresight to see that one of the dominant forms of entertainment emerging was games, so he laid it out: ‘I want to make a WWII game that kids can play to introduce them to these stories,'” Hirschmann said. “He was very specific about wanting to call it Medal of Honor, because that award represents going above and beyond the call of duty.”

So far, given that the U.S. is one of the biggest video game markets in the world, the U.S. perspective on the war dominates in video games too; as is the case with World War II movies, most are set in Western Europe. In fact, in 2013, Russian players blasted Company of Heroes 2 for repeating American stereotypes of the Eastern Front.

Ultimately, history games can spark interest in learning more, says Bob Whitaker, a professor of History at Collin College and host of the podcast History Respawned, where historians talk about history-themed video games. His own passion for this subject dates back to playing Civilization II in the mid-1990s, which inspired him to create a “mod” (a player-made tweak) re-creating his grandfather’s experience in World War II as a pilot who flew missions over the Himalayas.

And he wasn’t the only one. “Recently with the centenary of the end of World War I, I can’t tell you how many conversations I had with students and other scholars about how games they played like Battlefield 1 or Valiant Hearts: The Great War portrayed the First World War,” he says. “Going forward you are going to see the same sort of conversations about Second World War games.”

In the next 75 years, he hopes more historians will see the value of games as Spielberg did. “I look at games as being in a similar position as motion pictures were in the beginning of the 20th century. Games will be taken much more seriously in the 21st century; they’re going to carry much more historical weight,” says Whitaker. “Games are going to be a part of the ways in which we remember the past going forward. Historians have to offer a helping hand in case players want to know more.”

Get your history fix in one place: sign up for the weekly TIME History newsletter

Below, Whitaker picks five other titles that show how games are telling the story of World War II:

Through the Darkest of Times (2019)

“In Through the Darkest of Times, developed by [Berlin-based] Paintbucket Games, you play as a German resistance group living in Berlin during the Second World War. The White Rose was a group of German students who attempted to resist the Nazi regime. Students were executed, and the game is trying to tell that story to a certain extent. Games are typically about player empowerment, living out a power fantasy. Your missions don’t often involve violence, but instead the weapons of the weak: sabotage, graffiti, and spreading leaflets. The game exposes players to a history most people don’t know while the game’s mechanics illustrate for the player how difficult resistance to Nazism often was for ordinary people.”

Attentat 1942 (2017)

“Developed by Charles University in Prague, it’s about the Nazi occupation of Czechoslovakia, featuring survivors. It only takes about two to three hours to finish so it’s the type of fun, cheap game you could play in an afternoon. A number of historians helped to develop the game, so it stands out for its fidelity to the history of the conflict. It marries compelling game mechanics with authentic history.”

Call of Duty: WWII (2017)

“Most Second World War games don’t mention the major tragedies or anything related to the Holocaust.

“While Call of Duty: WWII does fall into the clichés and traps common to WWII video games, where you’re getting a lot of bombast and blockbuster set pieces, at the same time the developers at Activision are doing something rather brave, which is bringing up the Holocaust in a major AAA video title. It culminates in a mission where you are liberating a concentration camp. There is no violence. You’re solemnly going through the remains at the concentration camp.”

Wolfenstein: The New Order (2014)

“Wolfenstein: The New Order is a pulpy, grindhouse sci-fi version of World War II, set in the early 1960s when the Nazis have won the war. Your character, ‘B.J.’ Blazkowicz, goes into a prison camp, and in the course of the mission, you see Jews, people of color, and other enemy groups being treated poorly in these camps. You see gas chambers, crematoriums, bodies being burned.

“We’re moving into an era where survivors of the Holocaust are passing away, and you’ve got to rely on secondary sources. As we get further away from the Second World War, it’s really important to remind players of the crimes the Nazis committed.”

Hearts of Iron IV (2016)

“Hearts of Iron IV attempts to replicate as accurately as possible the starting conditions for various world powers in the late 1930s, giving you the opportunity to either replicate history or pursue some sort of counterfactual scenario in which you are attempting to change the outcome of the war. People use it to create more historically realistic scenarios, but it’s also controversial because it’s popular among groups of ‘modders’ who revel in racist and ethnonationalist counterfactual histories. The base game is solid, but some of the player-created content can be very disturbing. Games are not like a book or movie. As the audience, you’re not simply receiving the game. You can add onto the game and manipulate it.”

Fox News Breaking News Alert

Trump, in convention speech excerpts, to slam Biden for 'extreme' agenda

08/27/20 10:00 AM

New story in Technology from Time: TikTok’s CEO Resigns Amid U.S. Pressure to Sell the Video App



HONG KONG (AP) — TikTok CEO Kevin Mayer resigned Thursday amid U.S. pressure for its Chinese owner to sell the popular video app, which the White House says is a security risk.

In a letter to employees, Mayer said that his decision to leave comes after the “political environment has sharply changed.”

His resignation follows President Donald Trump’s order to ban TikTok unless its parent company, ByteDance, sells its U.S. operations to an American company within 90 days.

“I have done significant reflection on what the corporate structural changes will require, and what it means for the global role I signed up for,” he said in the letter. “Against this backdrop, and as we expect to reach a resolution very soon, it is with a heavy heart that I wanted to let you all know that I have decided to leave the company.”

ByteDance is currently in talks with Microsoft for the U.S. firm to buy TikTok’s U.S. operations.

Mayer, a former Disney executive, joined TikTok as CEO in May.

TikTok thanked Mayer.

“We appreciate that the political dynamics of the last few months have significantly changed what the scope of Kevin’s role would be going forward, and fully respect his decision,” the company said in a statement.

ByteDance launched TikTok in 2017, then bought Musical.ly, a video service popular with teens in the U.S. and Europe, and combined the two. A twin service, Douyin, is available for Chinese users.

TikTok gained immense popularity via its fun, goofy videos and ease of use, and has hundreds of millions of users globally.

But its Chinese ownership has raised concerns about potential censorship of videos, including those critical of the Chinese government, and the risk Beijing may access user data.

Earlier this month, Trump ordered a sweeping but unspecified ban on dealings with the Chinese owners of consumer apps TikTok and WeChat as the U.S. heightens scrutiny of Chinese technology companies, citing concerns that they may pose a threat to national security.

Fox News Breaking News Alert

Hurricane Laura makes landfall as a Category 4 storm

08/26/20 11:18 PM

Wednesday, 26 August 2020

Fox News Breaking News Alert

Vice President Pence highlights Night Three of the Republican National Convention, which will also feature Kayleigh McEnany, Dan

08/26/20 5:30 PM

Fox News Breaking News Alert

Jacob Blake had a knife in his car when he was shot by Kenosha, Wisc., police, according to the Wisconsin DOJ, which has now nam

08/26/20 5:09 PM

Fox News Breaking News Alert

Lezmond Mitchell executed in Indiana

08/26/20 4:06 PM

Fox News Breaking News Alert

NBA announces tonight’s playoff games will be postponed

08/26/20 2:31 PM

Fox News Breaking News Alert

Hurricane Laura, now Category 4, may bring 'unsurvivable' storm surge for Texas-Louisiana border

08/26/20 10:53 AM

Fox News Breaking News Alert

Trump says he will send federal law enforcement, National Guard to Kenosha after Jacob Blake shooting

08/26/20 10:50 AM

Fox News Breaking News Alert

Juvenile arrested in deadly shooting during Kenosha unrest, Illinois police say

08/26/20 10:41 AM

New story in Technology from Time: Artificial Intelligence Is Here To Calm Your Road Rage



I am behind the wheel of a Nissan Leaf, circling a parking lot, trying not to let the day’s nagging worries and checklists distract me to the point of imperiling pedestrians. Like all drivers, I am unwittingly communicating my stress to this vehicle in countless subtle ways: the strength of my grip on the steering wheel, the slight expansion of my back against the seat as I breathe, the things I mutter to myself as I pilot around cars and distracted pedestrians checking their phones.

“Hello, Corinne,” a calm voice says from the audio system. “What’s stressing you out right now?”

The conversation that ensues offers a window into the ways in which artificial intelligence could transform our experience behind the wheel: not by driving the car for us, but by taking better care of us as we drive.

Before coronavirus drastically altered our routines, three-quarters of U.S. workers—some 118 million people—commuted to the office alone in a car. From 2009 to 2019, Americans added an average of two minutes to their commute each way, according to U.S. Census data. That negligible daily average is driven by a sharp increase in the number of people making “super commutes” of 90 minutes or more each way, a population that increased 32% from 2005 to 2017. The long-term impact of COVID-19 on commuting isn’t clear, but former transit riders who opt to drive instead of crowding into buses or subway cars may well make up for car commuters who skip at least some of their daily drives and work from home instead.

Longer commutes are associated with increased physical health risks like high blood pressure, obesity, stroke and sleep disorders. A 2017 research project at the University of the West of England found that every extra minute of the survey respondents’ commutes correlated with lower job and leisure time satisfaction. Adding 20 minutes to a commute, researchers found, has the same depressing effect on job satisfaction as a 19% pay cut.

Switching modes of transit can offer some relief: people who walk, bike or take trains to work tend to be happier commuters than those who drive (and, as a University of Amsterdam study recently found, they tend to miss their commute more during lockdown). But reliable public transit is not universally available, nor are decent jobs always close to affordable housing.

Technology has long promised that an imminent solution is right around the corner: self-driving cars. In the near future, tech companies claim, humans won’t drive so much as be ferried about by fully autonomous cars that will navigate safely and efficiently to their destinations, leaving the people inside free to sleep, work or relax as easily as if they were on their own couch. A commute might be a lot less stressful if you could nap the whole way there, or get lost in a book or Netflix series without having to worry about exits or collisions.

In 2012, Google executives went on the record claiming self-driving cars would be widely available within five years; they said the same thing again in 2015. Elon Musk throws out ship dates for fully autonomous Teslas as often as doomsday cult leaders reschedule the end of the world. Yet these forecasted utopias have still not arrived.

The majority of carmakers have walked back their most ambitious estimates. It will likely be decades before such cars are a reality for even a majority of drivers. In the meantime, the car commute remains a big, unpleasant, unhacked chunk of time in millions of Americans’ daily lives.

A smaller and less heralded group of researchers is working on how cars can make us happier while we drive them. It may be decades before artificial intelligence can completely take over piloting our vehicles. In the short run, however, it may be able to make us happier—and healthier—pilots.


Lane changes, left turns, four-way stops and the like are governed by rules, but also rely on drivers’ making on-the-spot judgments with potentially deadly consequences. These are also the moments where driver stress spikes.

Many smart car features currently on the market give drivers data that assist with these decisions, like sensors that alert them when cars are in their blind spots or their vehicle is drifting out of its lane.

Another thing that causes drivers stress is uncertainty. One 2015 study found commuters who drove themselves to work were more stressed by the journey than were transit riders or other commuters, largely because of the inconsistency that accidents, roadwork and other traffic snarls caused in their schedules. But even if we can’t control the variables that affect a commute, we’re calmer if we can at least anticipate them—hence the popularity of real-time arrival screens at subway and bus stops.

The Beaverton, Ore.-based company Traffic Technology Services (TTS) makes a product called the Personal Signal Assistant, a platform that enables cars to communicate with traffic signals in areas where that data is publicly available. TTS’s first client, Audi, used the system to build a tool that counts down the remaining seconds of a red light (visually, on the dashboard) when a car is stopped at one, and suggests speed modifications as the car approaches a green light. The tool was designed to keep traffic flowing—no more honking at distracted drivers who don’t notice the light has turned green. But users also reported a marked decrease in stress. At the moment, the technology works in 26 North American metropolitan areas and two cities in Europe.

TTS has 60 full- and part-time employees in the U.S. and Germany, and recently partnered with Lamborghini, Bentley and a handful of corporate clients. Yet CEO Thomas Bauer says it can be hard to interest investors in technologies that focus on improving human drivers’ experience instead of just rendering them obsolete. “We certainly don’t draw the same excitement with investors as [companies focused on] autonomous driving,” Bauer says. “What we do is not quite as exciting because it doesn’t take the driver out of the picture just yet.”


Pablo Paredes, an instructor of radiology and psychiatry at the Stanford School of Medicine, is the director of the school’s Pervasive Wellbeing Technology Lab. Situated in a corner of a cavernous Palo Alto, Calif., office building that used to be the headquarters of the defunct health-technology company Theranos, the lab looks for ways to rejigger the habits and objects people use in their everyday lives to improve mental and physical health. Team members don’t have to look far for reminders of what happens when grandiose promises aren’t backed up by data: Theranos’ circular logo is still inlaid in brass in the building’s marble-floored atrium.

It can be hard to tell the lab’s experiments from its standard-issue office furniture. To overcome the inertia that often leads users of adjustable-height desks to sit more often than stand, one of the workstations in the team’s cluster of cubicles has been outfitted with a sensor and mechanical nodule that make it rise and lower at preset intervals, smoothly enough that a cup of coffee won’t spill. In early trials, users particularly absorbed in their work just kept typing as the desk rose, slowly standing up along with it.

But the millions of hours consumed in the U.S. each day by the daily drive to work hold special fascination for Paredes. He’s drawn to the challenge of transforming a part of the day generally thought of as detrimental to health into something therapeutic. “The commute for me is the big elephant in the room,” he says. “There are very simple things that we’re overlooking in normal life that can be greatly improved and really repurposed to help a lot of people.”

In a 2018 study, Paredes and his colleagues found that it’s possible to infer a driver’s muscle tension—a proxy for stress—from the movement of their hands on a car’s steering wheel. They’re now experimenting with cameras that detect neck tension by noting the subtle changes in the angle of a driver’s head as it bobs with the car’s movements.

The flagship of the team’s mindful-commuting project is the silver-colored Nissan Leaf in their parking lot. The factory-standard electric vehicle has been tricked out with a suite of technologies designed to work together to decrease a driver’s stress.

On a test drive earlier this year, a chatbot speaking through the car’s audio system offered me the option of engaging in a guided breathing exercise. When I verbally agreed, the driver’s seatback began vibrating at intervals, while the voice instructed me to breathe along with its rhythm.

The lab published the results of a small study earlier this year showing that the seat-guided exercise reduced driver stress and breathing rates without impairing performance. They are now experimenting with a second vibrating system to see if lower-frequency vibrations could be used to slow breathing rates (and therefore stress) without any conscious effort on the driver’s part.

The goal, eventually, is a mass-market car that can detect an elevation in a driver’s stress level, via seat and steering wheel sensors or the neck-tension cameras. It would then automatically engage the calming-breath exercise, or talk through a problem or tell a joke to ease tension, using scripts developed with the input of cognitive behavioral therapists.

These technologies have value even as cars’ autonomous capabilities advance, Paredes says. Even if a car is fully self-driving, the human inside will still often be a captive audience of one, encased in a private space with private worries and fears.

Smarter technologies alone aren’t the solution to commuters’ problems. The auto industry has a long history of raising drivers’ tolerance for long commutes by making cars more comfortable and attractive places to be—all the while promising a better driving experience that’s just around the corner, says Peter Norton, an associate professor of science, technology, and society at the University of Virginia and author of Fighting Traffic: The Dawn of the Motor Age in the American City. From his perspective, stress-busting seats would join radios and air conditioners as distractions from bigger discussions about planning, transit and growing inequality, all of which could offer much more value to commuters than a nicer car.

In addition, it is an open question how long it will be before these latest features become widely available options. Paredes’ lab had to suspend work during the pandemic, as it’s hard to maintain social distancing while working inside of a compact sedan. TTS is in talks to expand its offerings to other automakers, and Paredes has filed patents on some of his lab’s inventions. But just because a technology is relatively easy to integrate in a car doesn’t mean it will be standard soon. The first commercially available backup cameras came on the market in 1991. Despite their effectiveness in reducing collisions, only 24% of cars on the road had them by 2016, according to the Insurance Institute for Highway Safety, and most were newer luxury vehicles. (The cameras are now required by law in all new vehicles.)

These technologies also raise new questions of inequality and exploitation. It’s one thing for a commuter to opt for a seat that calms them down after a tough day. But if you drive for a living, should the company that owns your vehicle have the right to insist that you use a seat cover that elevates your breath rate and keeps you alert at the wheel? Who owns the health data your car collects, and who gets to access it? All of the unanswered questions that self-driving technologies raise apply to self-soothing technologies as well.


Back in Palo Alto, the pandemic still weeks away, I am piloting the Leaf around the parking lot with a member of the lab gamely along for the ride in the back. The chatbot asks again what’s stressing me out. I have a deadline, I say, for a magazine article about cars and artificial intelligence.

The bot asks if this problem is “significantly” affecting my life (not really), if I’ve encountered something similar before (yep), if previous strategies could be adapted to this scenario (they can) and when I’ll be able to enter a plan to tackle this problem in my calendar (later, when I’m not driving). I do feel a little better. I talk to myself alone in the car all the time. It’s kind of nice to have the car talk back.

“Great. I’m glad you can do something about it. By breaking down a problem into tiny steps, we can often string together a solution,” the car says. “Sound good?”
