Tuesday, 30 June 2020
Fox News Breaking News Alert
EXCLUSIVE: DHS deploys special federal unit to protect monuments over July 4 weekend amid vandalism fears
06/30/20 4:09 PM
Fox News Breaking News Alert
Mississippi governor signs bill retiring last state flag with Confederate battle emblem
06/30/20 3:24 PM
Fox News Breaking News Alert
Supreme Court strikes down ban on taxpayer funding for religious schools, in win for school choice movement
06/30/20 7:37 AM
Fox News Breaking News Alert
China passes controversial Hong Kong security law: report
06/29/20 11:49 PM
Monday, 29 June 2020
New story in Technology from Time: India Bans TikTok, Dozens More China-Linked Apps, Citing Security Concerns
(NEW DELHI) — India on Monday banned 59 apps with Chinese links, saying their activities endanger the country’s sovereignty, defense and security.
India’s decision comes as its troops are involved in a tense standoff with Chinese soldiers in eastern Ladakh in the Himalayas that started last month. India lost 20 soldiers in a June 15 clash.
Read more: China and India Try to Cool Nationalist Anger After Deadly Border Clash
The banned apps include TikTok, UC Browser, WeChat and Bigo Live, as well as the e-commerce platforms Club Factory and Shein, all of which are used on mobile and non-mobile devices connected to the Internet, according to a government statement.
“The Ministry of Information Technology has received many complaints from various sources including several reports about misuse of some mobile apps available on Android and iOS platforms for stealing and surreptitiously transmitting users’ data in an unauthorized manner to servers which have locations outside India,” the statement said.
It said there have been mounting concerns about data security and safeguarding the privacy of 1.3 billion Indians. The government said such concerns also pose a threat to sovereignty and security of the country.
The compilation of these data, its mining and profiling by elements hostile to national security and the defense of India was “a matter of very deep and immediate concern which requires emergency measures,” the statement continued.
Sunday, 28 June 2020
Fox News Breaking News Alert
Worldwide coronavirus deaths surpass 500,000 mark, Johns Hopkins University research shows
06/28/20 2:23 PM
Fox News Breaking News Alert
California governor orders bars closed in counties including Los Angeles, citing coronavirus
06/28/20 12:33 PM
Fox News Breaking News Alert
Global coronavirus infections pass 10M mark, data show
06/28/20 3:09 AM
Saturday, 27 June 2020
Fox News Breaking News Alert
Pence postpones Florida, Arizona campaign events amid increase in coronavirus cases
06/27/20 1:06 PM
Friday, 26 June 2020
Fox News Breaking News Alert
Trump signs executive order to protect American monuments, memorials and statues
06/26/20 3:34 PM
New story in Technology from Time: Facebook Will Flag Rule Violations on ‘Newsworthy’ Posts from Politicians, Including President Trump
(OAKLAND, Calif.) — Facebook says it will flag all “newsworthy” posts from politicians that break its rules, including those from President Donald Trump.
CEO Mark Zuckerberg had previously refused to take action against Trump posts suggesting that mail-in ballots will lead to voter fraud. Twitter, by contrast, slapped a “get the facts” label on them. Facebook is also banning false claims intended to discourage voting, such as stories about federal agents checking legal status at polling places. The company also said it is increasing its enforcement capacity to remove false claims about local polling conditions in the 72 hours before the U.S. election.
Shares of Facebook and Twitter dropped sharply Friday after the giant company behind brands such as Ben & Jerry’s ice cream and Dove soap said it will halt U.S. advertising on Facebook, Twitter and Instagram through at least the end of the year.
Read more: Trump’s Attempt to Change Social Media’s Rules Is Futile Political Coercion
That European consumer-product maker, Unilever, said it took the move to protest the amount of hate speech online. Unilever said the polarized atmosphere in the United States ahead of November’s presidential election placed responsibility on brands to act.
Shares of both Facebook and Twitter fell roughly 7% following Unilever’s announcement.
The company, which is based in the Netherlands and Britain, joins a raft of other advertisers pulling back from online platforms. Facebook in particular has been the target of an escalating movement to withhold advertising dollars to pressure it to do more to prevent racist and violent content from being shared on its platform.
“We have decided that starting now through at least the end of the year, we will not run brand advertising in social media newsfeed platforms Facebook, Instagram and Twitter in the U.S.,” Unilever said. “Continuing to advertise on these platforms at this time would not add value to people and society.”
Facebook did not immediately respond to a request for comment. On Thursday, Verizon joined others in the Facebook boycott.
Sarah Personette, vice president of global client solutions at Twitter, said the company’s “mission is to serve the public conversation and ensure Twitter is a place where people can make human connections, seek and receive authentic and credible information, and express themselves freely and safely.”
She added that Twitter is “respectful of our partners’ decisions and will continue to work and communicate closely with them during this time.”
Fox News Breaking News Alert
House approves DC statehood bill, GOP calls move Dem 'power grab'
06/26/20 12:12 PM
Thursday, 25 June 2020
Fox News Breaking News Alert
PROGRAMMING ALERT: Sean Hannity's town hall with President Trump, 9 pm ET on Fox News
06/25/20 5:51 PM
New story in Technology from Time: Verizon Pulls Facebook and Instagram Ads Over Hate Speech and Disinformation
(Bloomberg) — Verizon Communications Inc. said it is pausing the placement of ads on Facebook Inc. and Instagram until the social networks can get better control over posts that spread disinformation.
“We have strict content policies in place and have zero tolerance when they are breached, we take action,” Verizon Chief Media Officer John Nitti said in a statement. “We’re pausing our advertising until Facebook can create an acceptable solution that makes us comfortable and is consistent with what we’ve done with YouTube and other partners.”
Verizon is one of the largest advertisers to pull its ads from Facebook as part of an effort by civil rights organizations to pressure the social-media company to take action on hate speech and misleading content. Groups including the Anti-Defamation League and Color of Change started the campaign, called Stop Hate For Profit, to encourage advertisers to boycott Facebook ads in July. Verizon’s move follows participation by Recreational Equipment Inc., Patagonia Inc., Upwork Inc., Ben & Jerry’s and other brands.
“We applaud Verizon for joining this growing fight against hate and bigotry by pausing their advertising on Facebook’s platforms, until they put people and safety over profit,” Jonathan Greenblatt, chief executive officer of ADL, said in a statement. “This is how real change is made.”
Facebook has been telling advertisers that it bases its policies on principles, not business interests, according to its communications with marketers. The Menlo Park, California-based company has been reaching out to advertisers to discuss its recent initiatives on registering voters and distributing verified election information.
But it’s not just advertisers that are upset. U.S. lawmakers have also put pressure on Facebook, Twitter Inc. and Google to combat disinformation, including during a House Intelligence Committee hearing last week.
“We respect any brand’s decision, and remain focused on the important work of removing hate speech and providing critical voting information,” Carolyn Everson, vice president of Facebook’s global business group, said in a statement. “Our conversations with marketers and civil rights organizations are about how, together, we can be a force for good.”
Verizon’s move was reported earlier by Ad Age.
New story in Technology from Time: Apple Ditching Intel Could Mean Big Changes Ahead for Mac Desktops and Laptops
Apple’s first-ever virtual Worldwide Developer Conference (WWDC) came with the usual slew of mostly predictable announcements, like upgrades to the iPhone and iPad operating systems, new features for its AirPod earbuds, and more. But its most striking news was a decision to shift from powering its Mac devices with Intel processors in favor of its own homemade chip, which it’s calling “Apple silicon.”
The Cupertino, Calif. tech giant claims the move will bring myriad benefits: it will make Macs faster, let them benefit from the company’s latest machine learning technology (for features like augmented reality, photo processing, facial recognition and more), and make it easier for developers to bring popular apps from the iPhone and the iPad to desktops and laptops. The transition to Apple silicon will take about two years; more Intel-powered Macs are yet to come.
Apple silicon differs from Intel’s processors by virtue of its architecture, which determines how a computer executes tasks. Apple is using ARM technology, which boasts faster performance with less power use compared to the architecture used by Intel (and its rival AMD). Generally, ARM processors make sense for devices like phones and tablets (because ARM chips use less battery power), while Intel and AMD’s chips have made more sense for high-performance desktops and laptops (where battery usage is less of a concern).
While the move may seem like a major blow to Intel—a longtime processor giant whose “Intel Inside” motto was once ubiquitous in computer stores—the company has already been moving away from making chips for companies like Apple, focusing instead on autonomous vehicle hardware, AI analytics, and high-margin, high-end processors for entertainment and gaming PCs.
“They recognize the challenges that are inherent in the client businesses these days and while they’re not going anywhere, they’re certainly trying to diversify themselves away from that,” says NPD Group analyst Stephen Baker. The Apple news, he says, is “not great, but in the long run I don’t think it’ll have an incredible impact on Intel.”
To be sure, Intel faces some headwinds. It still leads in market share, but it has consistently lost ground in the consumer market month after month to rival AMD (AMD’s share of the desktop market jumped from 12% to 18% in the past two years, according to Mercury Research). Intel has also struggled to gain ground in the mobile world—it sold its ARM processor subsidiary in 2005, killed off a pair of experimental augmented reality glasses in 2018, and last year stopped making 5G smartphone modems in favor of focusing on 5G infrastructure.
Perhaps Intel’s biggest struggle—and a reason it lost favor with Apple—is a never-ending battle with the laws of physics. Processors are composed of billions of transistors that perform calculations by turning on and off. The larger the transistor (measured in nanometers, or “nm”), the more power it uses. By using smaller transistors, you can fit more of them on a processor, which means more computing power, but also more energy efficiency. All told, the size of a processor’s transistors is a good indicator of how powerful that processor will be.
For most of modern computing history, chipmakers like Intel have been able to rely on “Moore’s law”—an observation that the number of transistors you can fit on a single chip doubles about every two years, thanks mostly to technological improvements. But, space being a finite thing, it’s getting harder and harder to cram transistors onto processors. As of today, Intel can make 10 nm processors, but even that achievement came after significant delays that put it behind the curve. By comparison, Taiwan Semiconductor Manufacturing Company, which makes mobile chips designed by Apple, has released smaller, more efficient 7 nm processors for mobile devices.
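For readers who want to see that doubling rule spelled out, here is a minimal Python sketch of the back-of-the-envelope math; the starting transistor count is an arbitrary assumption for illustration, not a figure from Intel or TSMC.

```python
# Illustrative sketch of Moore's law as a rule of thumb:
# transistor counts roughly double every two years.
# The starting count below is an assumption, not vendor data.

def projected_transistors(start_count: float, years: float,
                          doubling_period: float = 2.0) -> float:
    """Project a transistor count after `years`, doubling every `doubling_period` years."""
    return start_count * 2 ** (years / doubling_period)

if __name__ == "__main__":
    start = 2e9  # assume a chip with ~2 billion transistors today
    for years in (2, 4, 10):
        print(f"After {years:2d} years: ~{projected_transistors(start, years):,.0f} transistors")
```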
Intel’s delay in introducing 10 nm processors may have contributed to Apple’s decision to go its own way. “Having their own microprocessor architecture is something they’ve wanted to do since the Jobs era, for sure, to not be beholden to an outside partner,” says Jon Stokes, author of Inside the Machine: An Illustrated Introduction to Microprocessors and Computer Architecture, and co-founder of technology site Ars Technica. “I think the tipping point was when ARM started to catch up to Intel in … performance, and Intel stalled in processor leadership.”
Still, Intel is doing fine when it comes to powering other companies’ laptops, along with its other projects in AI and autonomous vehicle sensors. Its PC-centric business (providing processors for consumers’ desktops and laptops) grew by 14% year-over-year in the first quarter of 2020 as people bought new devices to work from home in the COVID-19 era. But in a sign of the company’s evolution, its best performing sector has been its data center group, which boosted revenue by 23% year-over-year thanks to an increase in cloud services.
What should the everyday Apple user make of the switch from Intel? It may end up being something to celebrate: Apple has a good track record of designing chips; the processor in the iPhone has outperformed Intel-powered laptops at certain tasks. Furthermore, the company is bringing popular third-party developers like Adobe on board early in the process, which should ensure that Apple silicon-powered Macs have plenty of useful software from the jump (avoiding a critical misstep Microsoft made when releasing an ARM-powered Surface). That Macs armed with ARM will be instantly compatible with millions of existing iPhone and iPad apps is another nice bonus. And if Apple’s making Macs with iPhone-like internals, it’s not much of a stretch to imagine features like integrated LTE or 5G wireless connectivity, Face ID, and other mobile-only goodies coming to its desktops and laptops. All told, Apple betting on itself might be the best decision the company has made since, well, betting on Intel.
Fox News Breaking News Alert
Supreme Court hands Trump administration win on deportation powers
06/25/20 8:07 AM
Wednesday, 24 June 2020
Fox News Breaking News Alert
BET founder mocks crowds tearing down statues, calls them 'borderline anarchists'
06/24/20 2:52 PM
New story in Technology from Time: Social Media Platforms Claim Moderation Will Reduce Harassment, Disinformation and Conspiracies. It Won’t
If the United States wants to protect democracy and public health, it must acknowledge that internet platforms are causing great harm and accept that executives like Mark Zuckerberg are not sincere in their promises to do better. The “solutions” Facebook and others have proposed will not work. They are meant to distract us.
The news of the last few weeks highlighted both the good and the bad of platforms like Facebook and Twitter. The good: Graphic videos of police brutality from multiple cities transformed public sentiment about race, creating a potential movement for addressing an issue that has plagued the country since its founding. Peaceful protesters leveraged social platforms to get their message across, outcompeting the minority that advocated for violent tactics. The bad: waves of disinformation from politicians, police departments, Fox News, and others denied the reality of police brutality, overstated the role of looters in protests, and warned of busloads of antifa radicals. Only a month ago, critics exposed the role of internet platforms in undermining the country’s response to the COVID-19 pandemic by amplifying health disinformation. That disinformation convinced millions that face masks and social distancing were culture war issues, rather than public health guidance that would enable the economy to reopen safely.
The internet platforms have worked hard to minimize the perception of harm from their business. When faced with a challenge that they cannot deny or deflect, their response is always an apology and a promise to do better. In the case of Facebook, University of North Carolina Scholar Zeynep Tufekci coined the term “Zuckerberg’s 14-year apology tour.” If challenged to offer a roadmap, tech CEOs leverage the opaque nature of their platforms to create the illusion of progress, while minimizing the impact of the proposed solution on business practices. Despite many disclosures of harm, beginning with their role in undermining the integrity of the 2016 election, these platforms continue to be successful at framing the issues in a favorable light.
When pressured to reduce targeted harassment, disinformation, and conspiracy theories, the platforms frame the solution in terms of content moderation, implying there are no other options. Despite several waves of loudly promoted investments in artificial intelligence and human moderators, no platform has been successful at limiting the harm from third party content. When faced with public pressure to remove harmful content, internet platforms refuse to address root causes, which means old problems never go away, even as new ones develop. For example, banning Alex Jones removed conspiracy theories from the major sites, but did nothing to stop the flood of similar content from other people.
The platforms respond to each new public relations challenge with an apology, another promise, and sometimes an increased investment in moderation. They have done it so many times I have lost track. And yet, policy makers and journalists continue to largely let them get away with it.
We need to recognize that internet platforms are experts in human attention. They know how to distract us. They know we will eventually get bored and move on.
Despite copious evidence to the contrary, too many policy makers and journalists behave as if internet platforms will eventually reduce the harm from targeted harassment, disinformation, and conspiracies through content moderation. There are three reasons why it will not do so: scale, latency, and intent. These platforms are huge. In the most recent quarter, Facebook reported that 1.7 billion people use its main platform every day and roughly 2.3 billion across its four large platforms. They do not disclose the numbers of messages posted each day, but it is likely to be in the hundreds of millions, if not a billion or more, just on Facebook. Substantial investments in artificial intelligence and human moderators cannot prevent millions of harmful messages from getting through.
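To make the scale argument concrete, here is a rough back-of-the-envelope calculation in Python; the daily post volume, harmful-content rate, and moderation accuracy are illustrative assumptions, not figures disclosed by Facebook.

```python
# Back-of-the-envelope illustration: even highly accurate moderation
# leaves a large absolute number of harmful posts at platform scale.
# All figures are assumptions for illustration, not platform data.

daily_posts = 1_000_000_000      # assume ~1 billion posts per day
harmful_rate = 0.001             # assume 0.1% of posts are harmful
catch_rate = 0.95                # assume moderation catches 95% of them

harmful_posts = daily_posts * harmful_rate
missed_posts = harmful_posts * (1 - catch_rate)

print(f"Harmful posts per day: {harmful_posts:,.0f}")
print(f"Slipping past moderation: {missed_posts:,.0f}")  # ~50,000 per day
```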
The second hurdle is latency, which describes the time it takes for moderation to identify and remove a harmful message. AI works rapidly, but humans can take minutes or days. This means a large number of messages will circulate for some time before eventually being removed. Harm will occur in that interval. It is tempting to imagine that AI can solve everything, but that is a long way off. AI systems are built on data sets from older systems, and they are not yet capable of interpreting nuanced content like hate speech.
The final – and most important – obstacle for content moderation is intent. The sad truth is that the content we have asked internet platforms to remove is exceptionally valuable and they do not want to remove it. As a result, the rules for AI and human moderators are designed to approve as much content as possible. Alone among the three issues with moderation, intent can only be addressed with regulation.
A permissive approach to content has two huge benefits for platforms: profits and power. The business model of internet platforms like Facebook, Instagram, YouTube, and Twitter is based on advertising, the value of which depends on consumer attention. Where traditional media properties create content for mass audiences, internet platforms optimize content for each user individually, using surveillance to enable exceptionally precise targeting. Advertisers are addicted to the precision and convenience offered by internet platforms. Every year, they shift an ever larger percentage of their spending to them, from which platforms derive massive profits and wealth. Limiting the amplification of targeted harassment, disinformation, and conspiracy theories would lower engagement and revenues.
Power, in the form of political influence, is an essential component of success for the largest internet platforms. They are ubiquitous, which makes them vulnerable to politics. Tight alignment with the powerful ensures success in every country, which leads platforms to support authoritarians, including ones who violate human rights. For example, Facebook has enabled regime-aligned genocide in Myanmar and state-sponsored repression in Cambodia and the Philippines. In the United States, Facebook and other platforms have ignored or altered their terms of service to enable Trump and his allies to use the platform in ways that would normally be prohibited. For example, when journalists exposed Trump campaign ads that violated Facebook’s terms of service with falsehoods, Facebook changed its terms of service, rather than pulling the ads. In addition, Facebook chose not to follow Twitter’s lead in placing a public safety warning on a Trump post that promised violence in the event of looting.
Thanks to their exceptional targeting, platforms play an essential role in campaign fundraising and communications for candidates of both parties. While the dollars are not meaningful to the platforms, they derive power and influence from playing an essential role in electoral politics. This is particularly true for Facebook.
At present, platforms have no liability for the harms caused by their business model. Their algorithms will continue to amplify harmful content until there is an economic incentive to do otherwise. The solution is for Congress to change incentives by implementing an exception to the safe harbor of Section 230 of the Communications Decency Act for algorithm amplification of harmful content and guaranteeing a right to litigate against platforms for this harm. This solution does not impinge on First Amendment rights, as platforms are free to continue their existing business practices, except with liability for harms.
Thanks to COVID-19 and the protest marches, consumers and policy makers are far more aware of the role that internet platforms play in amplifying disinformation. For the first time in a generation, there is support in both parties in Congress for revisions to Section 230. There is increasing public support for regulation.
We do not need to accept disinformation as the cost of access to internet platforms. Harmful amplification is the result of business choices that can be changed. It is up to us and to our elected representatives to make that happen. The pandemic and the social justice protests underscore the urgency of doing so.
Fox News Breaking News Alert
3 men indicted on murder charges in killing of Ahmaud Arbery in Georgia, prosecutor announces
06/24/20 12:55 PM
Fox News Breaking News Alert
Senate Democrats block GOP-authored police reform bill
06/24/20 10:07 AM
Monday, 22 June 2020
Fox News Breaking News Alert
Seattle will move to dismantle 'CHOP' zone after shootings, mayor says
06/22/20 5:06 PM
New story in Technology from Time: The 5 Most Exciting Things Apple Announced at its Virtual WWDC 2020
The effects of COVID-19, which necessitate social distancing and nationwide stay-at-home orders, are evident even when it comes to press events. On Monday, Apple hosted its first-ever pre-recorded Worldwide Developers Conference (WWDC). The stream featured some snazzy camera work, and updates to both Apple software and hardware from CEO Tim Cook, SVP Craig Federighi, and various other Apple employees, including a number of women and people of color, the most ever shown in a single keynote presentation from the company.
Aside from the new faces, Apple went on to highlight the newest features coming to its software for its iPhone, iPad, and Apple Watch devices, along with its Apple TV and macOS platforms.
On top of all that, the company announced a long-rumored shift in the hardware that powers its Mac computers—one that echoes a transition it made 15 years ago that set it up for over a decade of success in the desktop and laptop space.
While most of the company’s updates to the operating systems governing its iPhone, iPad, Apple TV, and Mac platforms addressed functional and visual issues long overlooked (and long solved by competitors like Microsoft and Google), its biggest hardware change suggested that Apple, as Cook said at the end of the keynote, was still innovating.
Here are the five biggest announcements Apple just made at WWDC 2020:
iOS 14 updates the home screen and organizes your apps
As expected, Apple announced the next version of its iOS operating system for iPhone, iOS 14. The upcoming iOS 14 features a ton of user-friendly improvements and interface tweaks that make popular apps like Messages and Maps more functional, and adds improvements that make information easier to get to without jumping through hoops (or opening apps).
The updated home screen brings the iPhone interface up to snuff with competing Android smartphones. The widgets feature, long found on Android devices, will make it easier for people to glean a little more information at a glance without resorting to opening apps or swiping to another screen to check out their calendar, see different time zones, or take a look at what podcast or song is playing. Widgets are adjustable in size and in terms of the information displayed, and show up on the home screen right next to the rest of your apps.
As for the apps on your home screen—which at this point may span multiple pages if you still download apps regularly—the new iOS 14 will clean up the clutter on your behalf with automatically organized collections. Apps are more cleanly organized by use and category, and features like Siri (and incoming phone calls) are more compact, leaving the majority of your screen visible instead of dominating the entire display.
Apple’s Messages app has also received a few new tricks to make group messages easier to follow, and talking to the people most important to you more convenient, with threaded messages and pinned contacts. There are even more customization options for Memoji as well, including a face mask accessory.
In terms of completely new additions, Apple’s including a new Translate app that offers close to real-time translations for conversations between two speakers. Apple’s also introducing “App Clips,” lightweight versions of popular apps you can quickly use to pay for purchases or use services at participating businesses without downloading larger apps you might not want on your phone.
There are also new features in Apple’s CarPlay automotive interface, including the ability to unlock your car with a compatible iPhone (the feature will be available in new cars next year) and share keys with approved drivers simply by sending them a message.
Now you can sleep with your Apple Watch
Apple’s watchOS 7 introduces a few new features, including the long-desired sleep tracking, previously relegated to competing fitness devices or third-party apps like AutoSleep. “We are taking a more holistic approach to sleep by leveraging the devices you use every day to not only track your sleep but to support you in actually meeting your sleep duration goal,” said Vera Carr, Apple’s Health Software Engineering Manager. In watchOS 7, the native sleep tracking app now takes steps to ensure you get to bed on time and wake up when you want—when it’s time for bed, the watch will dim its display so as not to disturb you, and track your movements and breathing patterns while you’re asleep in order to suss out how long and how deep your sleep is. It will also offer users more options when it comes to wake-up alarms.
One of the more interesting and topical additions to WatchOS 7 is “handwashing detection.” It uses the watch’s built-in audio and movement sensors to determine when you’re engaging in a bit of hygiene, then displays a timer on your watch to help you scrub those hands for an appropriate amount of time without cutting any corners. The watch will notify you if you bow out early, and congratulate you when you finish the job.
Other features include the ability to share watch faces online or with friends, and download apps on said watch faces that you may not have installed on your watch or phone. Apple also added workout tracking for dancing, and can detect up to four different styles: hip-hop, Latin, cardio, and Bollywood.
iPad is making the Pencil essential
The iPad, with iPadOS, is receiving a great deal of the new features found on iOS 14, along with some additions unique to the tablet. In addition to visual updates to its most important apps like Photos and Notes, there are upgrades coming to the way the iPad works with the Apple Pencil, the company’s pressure-sensitive stylus.
Apple’s been a pioneer when it comes to handwriting recognition, dating back to the introduction of the Apple Newton in the early ’90s, which was able to recognize text drawn using the included stylus. Now handwriting recognition is back, built into the upcoming version of iPadOS. With the new handwriting recognition, dubbed “Scribble,” users can treat written text as though it were typed—they can select it, move it, manipulate it, and even convert it to typed text. Users can write queries in text fields and web browsers and see them converted into typed text for searches as well.
macOS gets a refreshing coat of paint and app updates
The Mac has long been regarded as the most neglected platform in Apple’s arsenal. Cautious optimism is returning, thanks to the recent refresh of Apple’s Mac Pro and Mac mini desktop computers, as well as its moves to encourage developer adoption of its “Catalyst” programming tools (used to adapt mobile apps for the Mac). With the announcement of Apple Silicon (more on that below), which can run iOS and iPad apps without much work on the part of developers, it’s easy to see why.
In the upcoming macOS “Big Sur,” Apple’s made a slew of visual and interface changes meant to simplify the experience of using Apple’s own apps. Big Sur also brings elements found in iOS and iPadOS, like Control Center for easy access to settings like sound and brightness, and new widgets like those found in iOS 14.
Goodbye Intel, Hello Apple Silicon
The last big announcement at WWDC 2020 stole the show: Apple’s switching from Intel processors to in-house “Apple Silicon” processors. The switch is reminiscent of Apple’s 2005 switch from PowerPC to Intel, and it stands to fundamentally change the way Macs work.
The move comes after Apple has spent a decade creating mobile processors for its iPhone and iPad devices; its latest chips boast performance gains rivaling those of Intel-powered laptops. “With its powerful features and industry-leading performance, Apple silicon will make the Mac stronger and more capable than ever,” said CEO Tim Cook. “I’ve never been more excited about the future of the Mac.”
In addition, Cook presented a two-year road map meant to ease developers’ transition from building Intel-based apps to those running on Apple Silicon chips. And as part of a $500 Developer Transition Kit, programmers will be loaned a Mac Mini powered by the company’s upcoming A12Z Bionic processor.
Cook said Apple would continue to create Intel-powered Macs in the future, though did not specify when the Apple Silicon-powered devices would ultimately replace the Intel machines.
Apple also announced new features for its Apple TV platform, and new spatial audio features coming to AirPods Pro, seeking to simulate the effects of a surround sound system without the extra speakers.
The updated versions of Apple’s operating systems will be made available to consumers in the fall, with a public beta for the software updates scheduled to be released in July.
Fox News Breaking News Alert
Trump to sign order expanding immigration restrictions to include H-1B, other guest worker programs
06/22/20 12:56 PM
Sunday, 21 June 2020
Fox News Breaking News Alert
UK stabbing attack that left 3 dead being treated as terror incident, police say
06/21/20 3:44 AM
Saturday, 20 June 2020
Fox News Breaking News Alert
WATCH LIVE: President Trump holds rally in Tulsa; full coverage on Fox News Channel and on FoxNews.com
06/20/20 5:12 PM
Fox News Breaking News Alert
AG Barr says President Trump has fired federal prosecutor in Manhattan who refused to step down
06/20/20 12:56 PM
Friday, 19 June 2020
Fox News Breaking News Alert
Fox News Poll: Voters say yes to face masks, no to rallies
06/19/20 3:05 PM
Fox News Breaking News Alert
Oklahoma judge allows Trump rally to proceed on Saturday, as president says Tulsa curfew being lifted
06/19/20 1:14 PM
Fox News Breaking News Alert
Police officer involved in Breonna Taylor shooting to be fired, mayor says
06/19/20 9:37 AM
Thursday, 18 June 2020
Fox News Breaking News Alert
Fox News Poll: Biden widens lead over Trump; Republicans enthusiastic, but fear motivates Dems
06/18/20 3:06 PM
New story in Technology from Time: Facebook Removes Trump Campaign Ads Featuring Symbol Once Used by Nazis
(WASHINGTON) — Facebook has removed a campaign ad by President Donald Trump and Vice President Mike Pence that featured an upside-down red triangle, a symbol once used by Nazis to designate political prisoners, communists and others in concentration camps.
The company said in a statement Thursday that the ads violated “our policy against organized hate.” A Facebook executive who testified at a House Intelligence Committee hearing on Thursday said the company does not permit symbols of hateful ideology “unless they’re put up with context or condemnation.”
“In a situation where we don’t see either of those, we don’t allow it on the platform and we remove it. That’s what we saw in this case with this ad, and anywhere that that symbol is used, we would take the same action,” said Nathaniel Gleicher, the company’s head of security policy.
The Trump campaign spent more than $10,000 on the ads, which began running on Wednesday and targeted men and women of all ages across the U.S., though primarily in Texas, California and Florida.
In a statement, Trump campaign communications director Tim Murtaugh said the inverted red triangle was a symbol used by antifa, so it was included in an ad about antifa. He said the symbol is not in the Anti-Defamation League’s database of symbols of hate. The Trump campaign also argued that the symbol is an emoji.
“But it is ironic that it took a Trump ad to force the media to implicitly concede that Antifa is a hate group,” he added.
Antifa is an umbrella term for leftist militants bound more by belief than organizational structure. Trump has blamed antifa for the violence that erupted during some of the recent protests, but federal law enforcement officials have offered little evidence of this.
The ADL disputed that the red triangle was commonly used as an antifa symbol. The organization said the triangle was not in its database because it is a historical symbol and the database includes only those symbols used by modern-day extremists and white supremacists.
“Whether aware of the history or meaning, for the Trump campaign to use a symbol — one which is practically identical to that used by the Nazi regime to classify political prisoners in concentration camps — to attack his opponents is offensive and deeply troubling,” ADL chief executive officer Jonathan Greenblatt said in a statement.
The action comes as Facebook and other technology companies face persistent criticism, particularly from Democrats, about whether they are doing enough to police the spread of disinformation and tweets and posts from Trump perceived as inflammatory.
Those questions arose during Thursday’s hearing when a Twitter representative was asked why the company flagged but did not remove tweets from the president, including one that raised the prospect of shooting looters during the recent unrest in American cities. Facebook, too, was asked why it did not remove a doctored video of House Speaker Nancy Pelosi, D-Calif., last year that appeared to show her slurring her words.
“If we simply take a piece of content like this down, it doesn’t go away,” Gleicher said. “It will exist elsewhere on the internet. People who are looking for it will still find it.”
With Thursday’s hearing focused on the spread of disinformation tied to the 2020 election, the companies said they had not yet seen the same sort of concerted foreign influence campaigns like the one four years ago, when a Russian troll farm sowed discord online by playing up divisive social issues.
But that suggests the threat has evolved rather than diminished, said the executives, who pointed out that media companies controlled by the state were directly and openly engaging online on American social issues to affect public opinion. China, for instance, has likened allegations of police brutality in the U.S. to the criticism it faced for its aggressive treatment of protesters in Hong Kong last year.
Preventing disinformation ahead of the election is a significant challenge in a country facing potentially dramatic changes in how people vote, with expected widespread use of mail-in ballots creating openings to cast doubt on the results and even spread false information.
Facebook said Thursday that it is working to help Americans vote by mail, including by notifying users about how to request ballots and whether the date of their state’s election has changed.
The Vote By Mail notification connects Facebook users to information about how to request a ballot. It is targeted to voters in states where no excuse is needed to vote by mail or where fears of the coronavirus are accepted as a universal excuse.
“Providing that accurate information is one of the best ways to mitigate those kinds of threats,” Gleicher said.
New story in Technology from Time: ‘We’ll Do All We Can to Promote Free Speech,’ Says Zoom CEO Eric Yuan After Criticism on Encryption and Privacy
Zoom CEO Eric Yuan said he wants everyone to feel safe and protected when using the videoconferencing service, which has risen in popularity as much of the workforce works from home during the pandemic. “We want to make sure every user is happy,” said Yuan on Thursday’s Time100 Talks.
The sentiment comes after much discussion about the company’s controversial stance on both encryption and its compliance with local governments’ censorship laws, a hot-button issue that’s only garnered more attention amid rising protests around the world and a growing fear of government surveillance of protesters and activists, a fear confirmed by Zoom’s own actions taken against some users.
Mr. Yuan announced in a blog post the company’s plans to include end-to-end encryption support for both free and paid accounts, backpedaling from the company’s announcement two weeks ago explaining why calls made by free users would not be encrypted. “Our initial thought was to only give [end-to-end encryption] to paid users because we cannot identify who those free users are,” said Yuan, despite his earlier statements suggesting the reason was primarily to comply with the FBI and local law enforcement. “But based on feedback we figured out a way to identify those free users, meaning you can use your phone number for SMS verification… We are very committed to listening to our customers’ feedback and making changes to deliver happiness to our users.”
Still, the company’s actions run counter to Yuan’s statements. Recently, Zoom was criticized for suspending the accounts of three human rights activists hosting Zoom calls discussing Beijing’s 1989 Tiananmen Square incident, in which the Chinese government opened fire on demonstrators. “To be clear, their accounts have been reinstated, and going forward, we will have a new process for handling similar situations,” the company said in a blog post addressing the incident.
“On the one hand, we needed to make sure we’re in compliance with local laws,” said Yuan when asked about the suspension. “On the other hand, you are so right. We are an American company, we truly promote free speech. And now, we’re doing all we can to see if there’s a conflict. We want to make sure that we stick to our values rather than our revenue opportunities.”
In a conversation with TIME’s Haley Edwards, Yuan also discussed changes to the company’s internal structure, changes that coincide with the rapid global cultural shift driven by Black Lives Matter protests in the U.S. and around the world. The biggest change is the hiring of the company’s first Chief Diversity Officer, Damien Hooper-Campbell, who was previously Chief Diversity Officer for eBay, as well as Global Head of Diversity & Inclusion for Uber.
“I think every high tech company should take that very seriously,” said Yuan. “We should be on the forefront on this focus on justice, and do all we can to care about our community, to care about our society, and to make our communities a much better place.”
The community of Zoom users has grown at breakneck speed, from 10 million users in December to over 300 million as of April 2020, largely due to the COVID-19 pandemic, which has forced non-essential workers to remain at home and conduct more and more of their work online via videoconferencing. While the U.S. has begun inching toward the eventual lifting of stay-at-home orders around the country, it could be quite some time before people are able to resume their normal schedules, including a commute to the office. It also means more video calls are in everyone’s future, a factor that could lead to what many are calling Zoom burnout.
Yuan, aware of the growing concern of videoconferencing fatigue, said the company has partnered with the American Heart Association, and is funding a monthly “Happy Half-Hour Mental Wellness Webinar” series to the tune of half a million dollars.
As for Zoom’s future, Yuan is confident the service will make the in-person meetings of today feel antiquated. His more futurist-inspired examples include features like simulated physical interactions, such as handshakes, and even the possibility of scent detection, all within the next 50 years. “I think those features will truly enrich the videoconferencing space,” says Yuan. “We’re not there yet, but the future is bright.”
Fox News Breaking News Alert
Supreme Court rules against Trump administration bid to end DACA program
06/18/20 7:13 AM
Fox News Breaking News Alert
Jean Kennedy Smith, last surviving sibling of JFK, dead at 92
06/18/20 5:44 AM
Fox News Breaking News Alert
Vera Lynn, WWII allied forces singer, dead at 103
06/18/20 3:00 AM
Wednesday, 17 June 2020
Fox News Breaking News Alert
PROGRAMMING ALERT: President Trump live on 'Hannity,' coming up on Fox News
06/17/20 6:13 PM
New story in Technology from Time: Going to a Protest? Here’s How to Protect Your Digital Privacy
George Floyd’s killing at the hands of Minneapolis police—the latest in a grim drumbeat of similar deaths over many years—has sparked worldwide protests denouncing racism and law enforcement’s abuse of power.
Floyd’s May 25 death may have gone largely unnoticed had it not been recorded by multiple smartphone-wielding bystanders, whose footage forced America to finally reckon with the dangers Black Americans unjustly face simply when leaving their house. And as police departments across the United States have responded to largely peaceful protests with disproportionate violence, smartphone footage has provided crucial evidence refuting official descriptions of harrowing events.
Even as protesters turn to their smartphones as a means to record their experiences on the ground, those same devices can be used against them. Law enforcement groups have digital surveillance tools, like fake cell phone towers and facial recognition technology, that can be used to identify protestors and monitor their movements and communications. Furthermore, investigators and prosecutors have come to view suspects’ phones as potential treasure troves of information about them and their associates, setting up legal battles over personal technology and Americans’ Constitutional rights. And while protesters are within their rights to take pictures and video at protests, the images they capture could lead to unintended consequences for vulnerable participants.
What should peaceful protesters know about their digital privacy before heading to a demonstration? Here are some things they should keep in mind.
Keeping protest prep private
Protesters should be mindful of the data they generate before heading to a demonstration—police can issue warrants for a person’s search history, chat logs and social media posts, experts say. (Social media companies, including Facebook, have dedicated pages for police “to gather evidence in connection with an official investigation,” giving the police an easy way to access information.)
Cooper Quintin, senior staff technologist at the pro-privacy Electronic Frontier Foundation, recommends that protestors make their social media accounts private as a first step. “[Police will] go through social media looking for pictures from people who are tied to the protest, or groups that are organizing people to attend a protest,” Quintin says. He adds that demonstrators would also be smart to be skeptical about who’s trying to contact them online. “The police could try to add you as a friend, and they may be successful, so you’ll need to be vigilant there as well,” he says.
Using a service called a Virtual Private Network could help organizers obfuscate their Internet traffic. Finding a trustworthy VPN is a chore in and of itself, but resources like That One Privacy Site offer helpful comparison charts. Alternatively, they can use tools like the Tor browser, which masks a user’s online activity by blocking trackers and encrypting their network traffic multiple times. (No tool can offer 100% privacy, of course.)
Meanwhile, any protest-related organizing should be conducted over end-to-end encrypted apps rather than text messages (otherwise known as SMS), says Daniel Kahn Gillmor, senior technologist at the American Civil Liberties Union. Signal is one favorite that works across platforms, meaning iPhone users can chat with Android owners and vice-versa. “The content of [SMS messages] is not protected, and the destination of those messages is not protected,” he says.
Smartphones optional
It can be difficult to say with certainty what kind of surveillance technology any given law enforcement group is using during a particular event. But one popular system, the Stingray, is essentially a fake cell phone tower that tricks phones into connecting to it, then collects data from connected devices.
With that kind of tech in mind, it may be wise for demonstrators to simply leave their phone at home before heading to a protest—if an event is especially crowded, they might have trouble getting a good signal anyway. If they go sans-phone, it’s smart to pick a spot in advance to meet with friends if they get separated, bring a paper map of the protest area, and keep any essential contact information on them at all times.
But if a demonstrator is bringing their phone, they can certainly take measures to secure their data—turning it off or activating airplane mode could help, for instance. It couldn’t hurt to turn off location data, too. To do so on an iPhone, visit the Settings app, Privacy, then Location Services to selectively or completely disable location services for all of your apps. Android user? Visit the Settings app, select Privacy, then Permissions Manager to see which apps have access to what parts of your data, including location, call logs, and contacts. From there you can disable or enable whichever apps you decide. (Keep in mind this won’t keep you completely off the grid, as companies like Google have said they collect location data even when users disable permissions, with the explanation that the company requires location data to provide users with its services.)
What about a burner phone?
Those who are really worried about having their phone tracked could get a “burner” phone, a prepaid device paid for in cash and used for the express purpose of staying in touch with people during a peaceful protest. Burner phones let users stay reachable—especially if things get dicey—without exposing all the data on their everyday device.
“If something happens to your phone, you don’t want to lose all of your stuff. You don’t want all of your life’s data to be in someone else’s hands,” says Harlo Holmes, director of newsroom digital security at the Freedom of the Press Foundation. “It’s really about protecting your data when you’re bringing it into what could possibly be an unknown situation.”
Another option is putting a phone in what’s called a “faraday pouch,” a case designed to block incoming and outgoing signals, effectively sealing off the device from the outside world. But some experts dispute whether such pouches are useful or necessary. “I think it’s just easier to put your phone in airplane mode, it achieves the same objective,” says the EFF’s Quintin.
Be mindful about photos and videos
Police departments nationwide are embracing facial recognition technology from companies like NEC and Clearview AI, some of which they can use to identify protesters following demonstrations based on photos or video from the event. While you may have no qualms about the police knowing you were at a protest, that calculus may be different for other participants who have less legal protection in the U.S., like undocumented immigrants.
Furthermore, in the context of law enforcement, facial recognition technology is still a new and unproven technology, sometimes resulting in false identifications and arrests, particularly when misused. The New York Police Department, for instance, was criticized after allegedly using a photo of the actor Woody Harrelson to “match” with a suspect in 2017. (The NYPD declined to comment on the use of Harrelson’s photo in particular, but spokesperson Sgt. Jessica McRorie said this case was one of “more than 5,300 requests” to the department’s Facial Identification Section that year. “The NYPD uses facial recognition as a limited investigative tool, comparing a still image from a surveillance video, to a pool of lawfully possessed arrest photos,” said McRorie.)
Still, due to inherent biases in the software, facial recognition technology has proven especially bad at correctly identifying Black people compared to those of other races, a fault that advocates say puts Black Americans at greater risk of false identification and arrest.
“We are in the midst of an uprising of historic magnitude, with hundreds of thousands of people already participating and potentially millions taking part in the days and weeks to come,” wrote Joy Buolamwini, founder of the Algorithmic Justice League, which fights against biases in artificial intelligence, in a recent essay. “At these scales, even small error rates can result in large numbers of people mistakenly flagged and targeted for arrest.”
It’s also wise to be cognizant of the information stored within an image itself, known as “metadata.” That includes information like the time and date a picture was taken, the type of phone used to record the image, and, depending on a user’s settings, even location data showing where the photo was taken. All that data can be used to identify a photographer or the people in their photos. (One trick to redact a photo’s metadata: obscure faces to foil facial recognition software, take a screenshot of the photo, then share the screenshot instead of the original photo.)
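For those comfortable with a bit of code, metadata can also be stripped programmatically before sharing. Below is a minimal sketch using the Pillow imaging library; the filenames are placeholders, and note that this removes metadata only, it does not obscure faces.

```python
# Minimal sketch: re-save a photo's pixels only, dropping EXIF metadata
# (timestamps, GPS coordinates, device info) before sharing.
# Requires Pillow (pip install Pillow); filenames are placeholders.

from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Copy the image's pixel data into a fresh image with no metadata."""
    with Image.open(src_path) as img:
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))  # pixel values only, no EXIF block
        clean.save(dst_path)

if __name__ == "__main__":
    strip_metadata("protest_photo.jpg", "protest_photo_clean.jpg")
```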
Of course, it’s important to document police brutality, if that can be done safely. Such footage can provide crucial evidence that violent episodes in fact occurred. “The best way to go about it is keep it as simple as possible, and to be conscious of the people that you’re around,” says Holmes.
Finally, wearing a mask can protect demonstrators from both facial recognition and COVID-19. “It’s important because you don’t want to spread viruses or catch the viruses that are out there,” says Quintin. “It’s also important because facial recognition is becoming more and more a regular tool used by law enforcement.” He suggests donning polarized glasses or goggles, covering any unique tattoos, and toning down eye-catching hairstyles.
What if you’re arrested?
If a protester is detained or arrested during a peaceful protest, police may ask for their phone or order them to delete footage. The protester can decline to consent to a search of their smartphone, but things get legally dicey from there. While some U.S. judges have ruled that obtaining a warrant to compel someone to unlock their phone with biometrics like a fingerprint is constitutional, others have deemed it a violation of the Fourth Amendment’s prohibition on unreasonable searches and/or the Fifth Amendment’s protections against self-incrimination.
In 2016, a California federal judge ruled that compelling a suspect to unlock their smartphone with their fingerprint did not violate the Fifth Amendment, given that a fingerprint is a physical attribute that cannot be divulged at will (like a passcode in a person’s memory). But in 2019, a federal judge in an unrelated case involving the search of various devices ruled that forcing the use of biometrics to unlock electronic devices is a violation of both the Fourth and Fifth Amendments.
Still, the legal uncertainty over the practice is reason enough for demonstrators to consider leaving their phone at home, or picking up a burner, before heading out to protest.
New story in Technology from Time: Facebook Removes Another 900 Accounts Linked to White Supremacy Groups
Facebook has removed another 900 social media accounts linked to white supremacy groups after members discussed plans to bring weapons to protests over police killings of black people.
The accounts on Facebook and Instagram were tied to the Proud Boys and the American Guard, two hate groups already banned on those platforms.
The company announced Tuesday that it recently took down 470 accounts belonging to people affiliated with the Proud Boys and another 430 linked to members of the American Guard.
Nearly 200 other accounts linked to the groups were removed late last month.
Facebook officials have said they were already monitoring the groups’ social media presence and were led to act when they spotted posts attempting to exploit the ongoing protests prompted by the death of George Floyd in Minneapolis.
Some of the accounts belonged to men reported to have participated in a brawl with protesters in Seattle, Facebook said. The company did not divulge details of the account users — such as their specific plans for protests or where in the U.S. they live.
“In both cases, we saw accounts from both organizations discussing attending protests in various US states with plans to carry weapons,” the company said in a statement. “But we did not find indications in their on-platform content they planned to actively commit violence.”
Both the Proud Boys and American Guard had been banned from Facebook for violating rules prohibiting hate speech. Facebook said it will continue to remove new pages, groups or accounts created by users trying to circumvent the ban.
Fox News Breaking News Alert
Former Atlanta police officer faces charges including felony murder in killing of Rayshard Brooks
06/17/20 12:58 PM
Fox News Breaking News Alert
Bolton, in book, accuses Trump of 'obstruction of justice as a way of life,' asking China for 2020 help
06/17/20 12:40 PM
New story in Technology from Time: Facebook Will Allow Users to Turn Off Political Ads But Warns Its Systems ‘Aren’t Perfect’
Facebook is launching a widespread effort to boost U.S. voter turnout and provide authoritative information about voting — just as it doubles down on its policy allowing politicians like President Donald Trump to post false information on the same subject.
The social media giant is launching a “Voting Information Center” on Facebook and Instagram that will include details on registering to vote, polling places and voting by mail. It will draw the information from state election officials and local election authorities.
The information hub, which will be prominently displayed on Facebook news feeds and on Instagram later in the summer, is similar to the coronavirus information center the company launched earlier this year in an attempt to elevate facts and authoritative sources of information on COVID-19.
Facebook and its CEO, Mark Zuckerberg, continue to face criticism for not removing or labeling posts by Trump that that spread misinformation about voting by mail and, many said, encouraged violence against protesters.
“I know many people are upset that we’ve left the President’s posts up, but our position is that we should enable as much expression as possible unless it will cause imminent risk of specific harms or dangers spelled out in clear policies,” Zuckerberg wrote earlier this month.
In a USA Today opinion piece Tuesday, Zuckerberg reaffirmed that position.
“Ultimately, I believe the best way to hold politicians accountable is through voting, and I believe we should trust voters to make judgments for themselves,” he wrote. “That’s why I think we should maintain as open a platform as possible, accompanied by ambitious efforts to boost voter participation.”
Facebook’s free speech stance may have more to do with not wanting to alienate Trump and his supporters while keeping its business options open, critics suggest.
Dipayan Ghosh, co-director of the Platform Accountability Project at Harvard Kennedy School, said Facebook “doesn’t want to tick off a whole swath of people who really believe the president and appreciate” his words.
In addition to the voting hub, Facebook will also now let people turn off political and social issue ads that display the “paid for by” designation, meaning a politician or political entity paid for it. The company announced this option in January but it is going into effect now.
Sarah Schiff, product manager who works on ads, cautioned that Facebook’s systems “aren’t perfect” and said she encourages users to report “paid for by” ads they see if they have chosen not to see them.
New story in Technology from Time: The U.S. Is Catching Up With China in Technology Adoption, AI Pioneer Kai-Fu Lee Says
The U.S. has started to catch up to China on the adoption of Artificial Intelligence technology, says AI expert Kai-Fu Lee.
When Lee—the chairman and CEO of Sinovation Ventures—wrote his book AI Superpowers in 2018, he argued that China was faster in implementing and monetizing AI technology. But the U.S. has started to close the gap in adopting and using AI day-to-day, Lee said at Wednesday’s TIME100 Talks event.
“China was way ahead in things like mobile payments, food delivery, robotics for delivery, things like that, but we also saw recently, in the U.S., very quickly peoples’ habits were forming about ordering food from home, about use of robotics in various places, in using more mobile technologies, mobile payments,” said Lee, who has been at the forefront of AI innovation for over three decades at Apple, Microsoft and Google, and today as an investor in Chinese tech startups.
The Chinese Communist Party has placed a huge focus in recent years on technological advancement to drive its economic growth. President Xi Jinping’s Made in China 2025 plan aims to ensure that China dominates AI as well as several other high-tech industries, and in 2018 China’s State Council issued the Next Generation Artificial Intelligence Development Plan to establish China as the “premier global AI innovation center” by 2030.
Lee said Chinese AI research has also advanced, with China now producing almost the same percentage of top papers as the U.S.
“I think both the U.S. and China have made up for their weaknesses and are now charging forward,” Lee says. “I think that’s a tremendous benefit to both Chinese and American consumers.”
In response to a question by TIME International Editor Dan Stewart on how the AI community is dealing with racial biases that might be embedded in the data that drives AI, Lee said it’s important for engineers to be trained to ensure their products don’t have bias, and for tools to be built to alert developers if data sets are biased or imbalanced.
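To make that last point concrete, the following is a minimal illustrative sketch of the kind of data-set check Lee describes: a tool that warns developers when a group is under-represented in training data. It is not code from the article or from any of Lee’s companies; the pandas library, the "group" column name, and the 10 percent threshold are all assumptions made for the example.

# Illustrative sketch only (not from the article): flag under-represented
# groups in a tabular data set before it is used for model training.
# The "group" column name and the 10% threshold are assumptions.
import pandas as pd

def flag_imbalance(df: pd.DataFrame, column: str, min_share: float = 0.10) -> list:
    """Return the values in `column` whose share of rows falls below min_share."""
    shares = df[column].value_counts(normalize=True)
    return [value for value, share in shares.items() if share < min_share]

if __name__ == "__main__":
    # Toy data set: group C makes up only 5% of rows, below the 10% threshold.
    data = pd.DataFrame({"group": ["A"] * 80 + ["B"] * 15 + ["C"] * 5})
    underrepresented = flag_imbalance(data, "group")
    if underrepresented:
        print("Warning: under-represented groups in training data:", underrepresented)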
Wednesday’s event also featured Bollywood star Ayushmann Khurrana, tennis star Naomi Osaka, former U.N. Secretary General Ban Ki-moon and a performance by K-pop group Monsta X.
Lee also said that the coronavirus pandemic is likely to speed up the adoption of AI because the data that’s feeding AI is increasing.
“We are working from home. We are online. We are contributing to the process of digitization. So a lot of the people who used to go to work and still not fully digitized—that is meetings, taking notes on paper—now it is becoming all digitized and all that content becomes training data for AI,” he says.
Lee said he expects the “next big frontier” for AI to be healthcare. “A lot of money is being pumped into AI healthcare.”
He says that as a result of the virus, many jobs that require contact with lots of people—like those of healthcare workers and restaurant servers—are likely to be replaced.
“We’re seeing a very rapid replacement of those jobs by robots, currently for our safety,” Lee says. “When I was quarantined in Beijing when I returned to Beijing, all my food and e-commerce and packages were delivered by robot to my doorstep.”
That, says Lee, is not something that will go away.
“After the pandemic, we won’t go back to the way it was,” he says.
This article is part of #TIME100Talks: Finding Hope, a special series featuring leaders across different fields encouraging action toward a better world.
Tuesday, 16 June 2020
New story in Technology from Time: Facebook Aims to Help Voters, But Won’t Block Misinformation From Politicians
Facebook is launching a widespread effort to boost U.S. voter turnout and provide authoritative information about voting — just as it doubles down on its policy allowing politicians like President Donald Trump to post false information on the same subject.
The social media giant is launching a “Voting Information Center” on Facebook and Instagram that will include details on registering to vote, polling places and voting by mail. It will draw the information from state election officials and local election authorities.
The information hub, which will be prominently displayed on people’s Facebook news feeds beginning on Wednesday — and on Instagram later in the summer — is similar to the coronavirus information center the company launched earlier this year in an attempt to elevate facts and authoritative sources of information on COVID-19.
Facebook and its CEO, Mark Zuckerberg, continue to face criticism for not removing or labeling posts by Trump that spread misinformation about voting by mail and, many said, encouraged violence against protesters.
“I know many people are upset that we’ve left the President’s posts up, but our position is that we should enable as much expression as possible unless it will cause imminent risk of specific harms or dangers spelled out in clear policies,” Zuckerberg wrote earlier this month.
In a USA Today opinion piece Tuesday, Zuckerberg reaffirmed that position.
“Ultimately, I believe the best way to hold politicians accountable is through voting, and I believe we should trust voters to make judgments for themselves,” he wrote. “That’s why I think we should maintain as open a platform as possible, accompanied by ambitious efforts to boost voter participation.”
Facebook’s free speech stance may have more to do with not wanting to alienate Trump and his supporters while keeping its business options open, critics suggest.
Dipayan Ghosh, co-director of the Platform Accountability Project at Harvard Kennedy School, said Facebook “doesn’t want to tick off a whole swath of people who really believe the president and appreciate” his words.
In addition to the voting hub, Facebook will also now let people turn off political and social issue ads that display the “paid for by” designation, meaning a politician or political entity paid for it. The company announced this option in January, but it is going into effect now.
Sarah Schiff, a product manager who works on ads, cautioned that Facebook’s systems “aren’t perfect” and encouraged users to report any “paid for by” ads that still appear after they have opted out of seeing them.