
Amped Status's List: Media & Technology

  • Feb 08, 20

    "Lawmakers are calling on the Justice Department to launch a "full-fledged investigation" into China Daily after the Washington Free Beacon found that the propaganda outlet ignored federal law.

    China Daily has spent millions to publish state-sanctioned propaganda in top American newspapers without complying with disclosure requirements for foreign agents, prompting Rep. Jim Banks (R., Ind.), Sen. Tom Cotton (R., Ark.), and 33 other members of Congress to demand a probe into the outlet's activities. On Thursday, the group sent a letter to Attorney General William Barr asking that the Justice Department "promptly review and produce a report on China Daily’s compliance" with federal disclosure laws.

    "Propaganda that seeks to obfuscate communist atrocities deserves to be counteracted," the letter says. "[The Foreign Agents Registration Act] already arms the federal government with tools to fight pernicious foreign influence. The DOJ should use them to clamp down on Chinese propaganda.""

  • Jan 31, 20

    "Ever suspect the Facebook app is listening to you? What we now know is even creepier.

    Facebook is giving us a new way to glimpse just how much it knows about us: On Tuesday, the social network made a long-delayed "Off-Facebook Activity" tracker available to its 2 billion members.

    It shows Facebook and sister apps Instagram and Messenger don't need a microphone to target you with those eerily specific ads and posts - they're all up in your business countless other ways.

    You can see how Facebook is stalking you, too. The "Off-Facebook Activity" tracker will show you 180 days' worth of the data Facebook collects about you from the many organisations and advertisers in cahoots with it."

  • Jan 21, 20

    The Secretive Company That Might End Privacy as We Know It
    A little-known start-up helps law enforcement match photos of unknown people to their online images — and “might lead to a dystopian future or something,” a backer says.

    Until recently, Hoan Ton-That’s greatest hits included an obscure iPhone game and an app that let people put Donald Trump’s distinctive yellow hair on their own photos.

    Then Mr. Ton-That — an Australian techie and onetime model — did something momentous: He invented a tool that could end your ability to walk down the street anonymously, and provided it to hundreds of law enforcement agencies, ranging from local cops in Florida to the F.B.I. and the Department of Homeland Security.

    His tiny company, Clearview AI, devised a groundbreaking facial recognition app. You take a picture of a person, upload it and get to see public photos of that person, along with links to where those photos appeared. The system — whose backbone is a database of more than three billion images that Clearview claims to have scraped from Facebook, YouTube, Venmo and millions of other websites — goes far beyond anything ever constructed by the United States government or Silicon Valley giants.

    Federal and state law enforcement officers said that while they had only limited knowledge of how Clearview works and who is behind it, they had used its app to help solve shoplifting, identity theft, credit card fraud, murder and child sexual exploitation cases.

    Until now, technology that readily identifies everyone based on his or her face has been taboo because of its radical erosion of privacy. Tech companies capable of releasing such a tool have refrained from doing so; in 2011, Google’s chairman at the time said it was the one technology the company had held back because it could be used “in a very bad way.” Some large cities, including San Francisco, have barred police from using facial recognition technology.

    But without public scrutiny, more than 600 law enforcement agencies have started using Clearview in the past year, according to the company, which declined to provide a list. The computer code underlying its app, analyzed by The New York Times, includes programming language to pair it with augmented-reality glasses; users would potentially be able to identify every person they saw. The tool could identify activists at a protest or an attractive stranger on the subway, revealing not just their names but where they lived, what they did and whom they knew.

    And it’s not just law enforcement: Clearview has also licensed the app to at least a handful of companies for security purposes.

    “The weaponization possibilities of this are endless,” said Eric Goldman, co-director of the High Tech Law Institute at Santa Clara University. “Imagine a rogue law enforcement officer who wants to stalk potential romantic partners, or a foreign government using this to dig up secrets about people to blackmail them or throw them in jail.”

    Clearview has shrouded itself in secrecy, avoiding debate about its boundary-pushing technology. When I began looking into the company in November, its website was a bare page showing a nonexistent Manhattan address as its place of business. The company’s one employee listed on LinkedIn, a sales manager named “John Good,” turned out to be Mr. Ton-That, using a fake name. For a month, people affiliated with the company would not return my emails or phone calls.

    While the company was dodging me, it was also monitoring me. At my request, a number of police officers had run my photo through the Clearview app. They soon received phone calls from company representatives asking if they were talking to the media — a sign that Clearview has the ability and, in this case, the appetite to monitor whom law enforcement is searching for.

    Facial recognition technology has always been controversial. It makes people nervous about Big Brother. It has a tendency to deliver false matches for certain groups, like people of color. And some facial recognition products used by the police — including Clearview’s — haven’t been vetted by independent experts.

    Clearview’s app carries extra risks because law enforcement agencies are uploading sensitive photos to the servers of a company whose ability to protect its data is untested.

    The company eventually started answering my questions, saying that its earlier silence was typical of an early-stage start-up in stealth mode. Mr. Ton-That acknowledged designing a prototype for use with augmented-reality glasses but said the company had no plans to release it. And he said my photo had rung alarm bells because the app “flags possible anomalous search behavior” in order to prevent users from conducting what it deemed “inappropriate searches.”

    In addition to Mr. Ton-That, Clearview was founded by Richard Schwartz — who was an aide to Rudolph W. Giuliani when he was mayor of New York — and backed financially by Peter Thiel, a venture capitalist behind Facebook and Palantir.

    Another early investor is a small firm called Kirenaga Partners. Its founder, David Scalzo, dismissed concerns about Clearview making the internet searchable by face, saying it’s a valuable crime-solving tool.

    “I’ve come to the conclusion that because information constantly increases, there’s never going to be privacy,” Mr. Scalzo said. “Laws have to determine what’s legal, but you can’t ban technology. Sure, that might lead to a dystopian future or something, but you can’t ban it.”


    [Image: Hoan Ton-That, founder of Clearview AI, whose app matches faces to images it collects from across the internet. Credit: Amr Alfiky for The New York Times]
    Addicted to A.I.
    Mr. Ton-That, 31, grew up a long way from Silicon Valley. In his native Australia, he was raised on tales of his royal ancestors in Vietnam. In 2007, he dropped out of college and moved to San Francisco. The iPhone had just arrived, and his goal was to get in early on what he expected would be a vibrant market for social media apps. But his early ventures never gained real traction.

    In 2009, Mr. Ton-That created a site that let people share links to videos with all the contacts in their instant messengers. Mr. Ton-That shut it down after it was branded a “phishing scam.” In 2015, he spun up Trump Hair, which added Mr. Trump’s distinctive coif to people in a photo, and a photo-sharing program. Both fizzled.

    Dispirited, Mr. Ton-That moved to New York in 2016. Tall and slender, with long black hair, he considered a modeling career, he said, but after one shoot he returned to trying to figure out the next big thing in tech. He started reading academic papers on artificial intelligence, image recognition and machine learning.

    Mr. Schwartz and Mr. Ton-That met in 2016 at a book event at the Manhattan Institute, a conservative think tank. Mr. Schwartz, now 61, had amassed an impressive Rolodex working for Mr. Giuliani in the 1990s and serving as the editorial page editor of The New York Daily News in the early 2000s. The two soon decided to go into the facial recognition business together: Mr. Ton-That would build the app, and Mr. Schwartz would use his contacts to drum up commercial interest.

    Police departments have had access to facial recognition tools for almost 20 years, but they have historically been limited to searching government-provided images, such as mug shots and driver’s license photos. In recent years, facial recognition algorithms have improved in accuracy, and companies like Amazon offer products that can create a facial recognition program for any database of images.

    Mr. Ton-That wanted to go way beyond that. He began in 2016 by recruiting a couple of engineers. One helped design a program that can automatically collect images of people’s faces from across the internet, such as employment sites, news sites, educational sites, and social networks including Facebook, YouTube, Twitter, Instagram and even Venmo. Representatives of those companies said their policies prohibit such scraping, and Twitter said it explicitly banned use of its data for facial recognition.

    Another engineer was hired to perfect a facial recognition algorithm that was derived from academic papers. The result: a system that uses what Mr. Ton-That described as a “state-of-the-art neural net” to convert all the images into mathematical formulas, or vectors, based on facial geometry — like how far apart a person’s eyes are. Clearview created a vast directory that clustered all the photos with similar vectors into “neighborhoods.” When a user uploads a photo of a face into Clearview’s system, it converts the face into a vector and then shows all the scraped photos stored in that vector’s neighborhood — along with the links to the sites from which those images came.
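
    The system described here is, in outline, standard embedding-plus-nearest-neighbor search: a network maps each face to a vector, and a query means finding indexed vectors close to the query vector (the "neighborhoods" Clearview describes are an indexing trick to avoid scanning the whole database). Clearview's code is not public, so the minimal Python sketch below is illustrative only; embed() is a hypothetical stand-in for a trained face-recognition network, and it does a brute-force scan rather than neighborhood lookups.

        # Illustrative sketch of face search as embedding + nearest-neighbor lookup.
        # embed() is a hypothetical stand-in for a trained face-recognition network.
        import numpy as np

        rng = np.random.default_rng(0)

        def embed(face_pixels):
            """Map a face image to a 128-dimensional unit vector."""
            v = np.resize(face_pixels.astype(float).ravel(), 128)
            v = v + rng.normal(0, 1e-6, 128)   # jitter so no vector is all zeros
            return v / np.linalg.norm(v)

        index_vectors = []  # embeddings of scraped photos
        index_sources = []  # parallel list of the URLs the photos came from

        def add_photo(face_pixels, url):
            index_vectors.append(embed(face_pixels))
            index_sources.append(url)

        def search(query_pixels, top_k=5):
            """Return the source URLs of the most similar indexed faces."""
            q = embed(query_pixels)
            sims = np.stack(index_vectors) @ q   # cosine similarity of unit vectors
            best = np.argsort(sims)[::-1][:top_k]
            return [(index_sources[i], float(sims[i])) for i in best]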

    Mr. Schwartz paid for server costs and basic expenses, but the operation was bare bones; everyone worked from home. “I was living on credit card debt,” Mr. Ton-That said. “Plus, I was a Bitcoin believer, so I had some of those.”

    [Image: Mr. Ton-That showing the results of a search for a photo of himself. Credit: Amr Alfiky for The New York Times]
    Going Viral With Law Enforcement
    By the end of 2017, the company had a formidable facial recognition tool, which it called Smartcheckr. But Mr. Schwartz and Mr. Ton-That weren’t sure whom they were going to sell it to.

    Maybe it could be used to vet babysitters or as an add-on feature for surveillance cameras. What about a tool for security guards in the lobbies of buildings or to help hotels greet guests by name? “We thought of every idea,” Mr. Ton-That said.

    One of the odder pitches, in late 2017, was to Paul Nehlen — an anti-Semite and self-described “pro-white” Republican running for Congress in Wisconsin — to use “unconventional databases” for “extreme opposition research,” according to a document provided to Mr. Nehlen and later posted online. Mr. Ton-That said the company never actually offered such services.

    The company soon changed its name to Clearview AI and began marketing to law enforcement. That was when the company got its first round of funding from outside investors: Mr. Thiel and Kirenaga Partners. Among other things, Mr. Thiel was famous for secretly financing Hulk Hogan’s lawsuit that bankrupted the popular website Gawker. Both Mr. Thiel and Mr. Ton-That had been the subject of negative articles by Gawker.

    “In 2017, Peter gave a talented young founder $200,000, which two years later converted to equity in Clearview AI,” said Jeremiah Hall, Mr. Thiel’s spokesman. “That was Peter’s only contribution; he is not involved in the company.”

    Even after a second funding round in 2019, Clearview remains tiny, having raised $7 million from investors, according to Pitchbook, a website that tracks investments in start-ups. The company declined to confirm the amount.

    In February, the Indiana State Police started experimenting with Clearview. They solved a case within 20 minutes of using the app. Two men had gotten into a fight in a park, and it ended when one shot the other in the stomach. A bystander recorded the crime on a phone, so the police had a still of the gunman’s face to run through Clearview’s app.

    They immediately got a match: The man appeared in a video that someone had posted on social media, and his name was included in a caption on the video. “He did not have a driver’s license and hadn’t been arrested as an adult, so he wasn’t in government databases,” said Chuck Cohen, an Indiana State Police captain at the time.

    The man was arrested and charged; Mr. Cohen said he probably wouldn’t have been identified without the ability to search social media for his face. The Indiana State Police became Clearview’s first paying customer, according to the company. (The police declined to comment beyond saying that they tested Clearview’s app.)

    Clearview deployed current and former Republican officials to approach police forces, offering free trials and annual licenses for as little as $2,000. Mr. Schwartz tapped his political connections to help make government officials aware of the tool, according to Mr. Ton-That. (“I’m thrilled to have the opportunity to help Hoan build Clearview into a mission-driven organization that’s helping law enforcement protect children and enhance the safety of communities across the country,” Mr. Schwartz said through a spokeswoman.)

    The company’s main contact for customers was Jessica Medeiros Garrison, who managed Luther Strange’s Republican campaign for Alabama attorney general. Brandon Fricke, an N.F.L. agent engaged to the Fox Nation host Tomi Lahren, said in a financial disclosure report during a congressional campaign in California that he was a “growth consultant” for the company. (Clearview said that it was a brief, unpaid role, and that the company had enlisted Democrats to help market its product as well.)

    The company’s most effective sales technique was offering 30-day free trials to officers, who then encouraged their acquisition departments to sign up and praised the tool to officers from other police departments at conferences and online, according to the company and documents provided by police departments in response to public-record requests. Mr. Ton-That finally had his viral hit.

    In July, a detective in Clifton, N.J., urged his captain in an email to buy the software because it was “able to identify a suspect in a matter of seconds.” During the department’s free trial, Clearview had identified shoplifters, an Apple Store thief and a good Samaritan who had punched out a man threatening people with a knife.

    Photos “could be covertly taken with telephoto lens and input into the software, without ‘burning’ the surveillance operation,” the detective wrote in the email, provided to The Times by two researchers, Beryl Lipton of MuckRock and Freddy Martinez of Open the Government. They discovered Clearview late last year while looking into how local police departments are using facial recognition.

    According to a Clearview sales presentation reviewed by The Times, the app helped identify a range of individuals: a person who was accused of sexually abusing a child whose face appeared in the mirror of someone else’s gym photo; the person behind a string of mailbox thefts in Atlanta; a John Doe found dead on an Alabama sidewalk; and suspects in multiple identity-fraud cases at banks.

    In Gainesville, Fla., Detective Sgt. Nick Ferrara heard about Clearview last summer when it advertised on CrimeDex, a list-serv for investigators who specialize in financial crimes. He said he had previously relied solely on a state-provided facial recognition tool, FACES, which draws from more than 30 million Florida mug shots and Department of Motor Vehicle photos.

    Sergeant Ferrara found Clearview’s app superior, he said. Its nationwide database of images is much larger, and unlike FACES, Clearview’s algorithm doesn’t require photos of people looking straight at the camera.

    “With Clearview, you can use photos that aren’t perfect,” Sergeant Ferrara said. “A person can be wearing a hat or glasses, or it can be a profile shot or partial view of their face.”

    He uploaded his own photo to the system, and it brought up his Venmo page. He ran photos from old, dead-end cases and identified more than 30 suspects. In September, the Gainesville Police Department paid $10,000 for an annual Clearview license.

    Federal law enforcement agencies, including the F.B.I. and the Department of Homeland Security, are trying it, as are Canadian law enforcement authorities, according to the company and government officials.

    Despite its growing popularity, Clearview avoided public mention until the end of 2019, when Florida prosecutors charged a woman with grand theft after two grills and a vacuum were stolen from an Ace Hardware store in Clermont. She was identified when the police ran a still from a surveillance video through Clearview, which led them to her Facebook page. A tattoo visible in the surveillance video and Facebook photos confirmed her identity, according to an affidavit in the case.

    ‘We’re All Screwed’
    Mr. Ton-That said the tool does not always work. Most of the photos in Clearview’s database are taken at eye level. Much of the material that the police upload is from surveillance cameras mounted on ceilings or high on walls.

    “They put surveillance cameras too high,” Mr. Ton-That lamented. “The angle is wrong for good face recognition.”

    Despite that, the company said, its tool finds matches up to 75 percent of the time. But it is unclear how often the tool delivers false matches, because it has not been tested by an independent party such as the National Institute of Standards and Technology, a federal agency that rates the performance of facial recognition algorithms.

    “We have no data to suggest this tool is accurate,” said Clare Garvie, a researcher at Georgetown University’s Center on Privacy and Technology, who has studied the government’s use of facial recognition. “The larger the database, the larger the risk of misidentification because of the doppelgänger effect. They’re talking about a massive database of random people they’ve found on the internet.”
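
    Ms. Garvie's "doppelgänger effect" can be made concrete with a back-of-the-envelope calculation: if each one-to-one comparison carries even a tiny false-match rate f, the chance that at least one of N strangers in the database falsely matches a probe photo is 1 - (1 - f)^N, which climbs toward certainty as N grows. The rate below is assumed purely for illustration, not a measured figure for Clearview or any vendor.

        # Illustrative only: the chance of at least one false match versus
        # database size, for an assumed per-comparison false-match rate.
        f = 1e-6  # hypothetical probability a random stranger matches the probe

        for n in (1_000_000, 100_000_000, 3_000_000_000):
            p_any = 1 - (1 - f) ** n
            print(f"N = {n:>13,}: P(at least one false match) = {p_any:.3f}")

    At three billion images, even a vanishingly small per-comparison error rate makes some false matches nearly certain, which is why independent testing of the kind NIST performs matters.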

    But current and former law enforcement officials say the app is effective. “For us, the testing was whether it worked or not,” said Mr. Cohen, the former Indiana State Police captain.

    One reason that Clearview is catching on is that its service is unique. That’s because Facebook and other social media sites prohibit people from scraping users’ images — Clearview is violating the sites’ terms of service.

    “A lot of people are doing it,” Mr. Ton-That shrugged. “Facebook knows.”

    Jay Nancarrow, a Facebook spokesman, said the company was reviewing the situation with Clearview and “will take appropriate action if we find they are violating our rules.”

    Mr. Thiel, the Clearview investor, sits on Facebook’s board. Mr. Nancarrow declined to comment on Mr. Thiel's personal investments.

    Some law enforcement officials said they didn’t realize the photos they uploaded were being sent to and stored on Clearview’s servers. Clearview tries to pre-empt concerns with an F.A.Q. document given to would-be clients that says its customer-support employees won’t look at the photos that the police upload.

    Clearview also hired Paul D. Clement, a United States solicitor general under President George W. Bush, to assuage concerns about the app’s legality.

    In an August memo that Clearview provided to potential customers, including the Atlanta Police Department and the Pinellas County Sheriff’s Office in Florida, Mr. Clement said law enforcement agencies “do not violate the federal Constitution or relevant existing state biometric and privacy laws when using Clearview for its intended purpose.”

    Mr. Clement, now a partner at Kirkland & Ellis, wrote that the authorities don’t have to tell defendants that they were identified via Clearview, as long as it isn’t the sole basis for getting a warrant to arrest them. Mr. Clement did not respond to multiple requests for comment.

    The memo appeared to be effective; the Atlanta police and Pinellas County Sheriff’s Office soon started using Clearview.

    Because the police upload photos of people they’re trying to identify, Clearview possesses a growing database of individuals who have attracted attention from law enforcement. The company also has the ability to manipulate the results that the police see. After the company realized I was asking officers to run my photo through the app, my face was flagged by Clearview’s systems and for a while showed no matches. When asked about this, Mr. Ton-That laughed and called it a “software bug.”

    “It’s creepy what they’re doing, but there will be many more of these companies. There is no monopoly on math,” said Al Gidari, a privacy professor at Stanford Law School. “Absent a very strong federal privacy law, we’re all screwed.”

    Mr. Ton-That said his company used only publicly available images. If you change a privacy setting in Facebook so that search engines can’t link to your profile, your Facebook photos won’t be included in the database, he said.

    But if your profile has already been scraped, it is too late. The company keeps all the images it has scraped even if they are later deleted or taken down, though Mr. Ton-That said the company was working on a tool that would let people request that images be removed if they had been taken down from the website of origin.

    Woodrow Hartzog, a professor of law and computer science at Northeastern University in Boston, sees Clearview as the latest proof that facial recognition should be banned in the United States.

    “We’ve relied on industry efforts to self-police and not embrace such a risky technology, but now those dams are breaking because there is so much money on the table,” Mr. Hartzog said. “I don’t see a future where we harness the benefits of face recognition technology without the crippling abuse of the surveillance that comes with it. The only way to stop it is to ban it.”

    Where Everybody Knows Your Name
    During a recent interview at Clearview’s offices in a WeWork location in Manhattan’s Chelsea neighborhood, Mr. Ton-That demonstrated the app on himself. He took a selfie and uploaded it. The app pulled up 23 photos of him. In one, he is shirtless and lighting a cigarette while covered in what looks like blood.

    Mr. Ton-That then took my photo with the app. The “software bug” had been fixed, and now my photo returned numerous results, dating back a decade, including photos of myself that I had never seen before. When I used my hand to cover my nose and the bottom of my face, the app still returned seven correct matches for me.

    Police officers and Clearview’s investors predict that its app will eventually be available to the public.

    Mr. Ton-That said he was reluctant. “There’s always going to be a community of bad people who will misuse it,” he said.

    Even if Clearview doesn’t make its app publicly available, a copycat company might, now that the taboo is broken. Searching someone by face could become as easy as Googling a name. Strangers would be able to listen in on sensitive conversations, take photos of the participants and know personal secrets. Someone walking down the street would be immediately identifiable — and his or her home address would be only a few clicks away. It would herald the end of public anonymity.

    Asked about the implications of bringing such a power into the world, Mr. Ton-That seemed taken aback.

    “I have to think about that,” he said. “Our belief is that this is the best use of the technology.”

    Jennifer Valentino-DeVries, Gabriel J.X. Dance and Aaron Krolik contributed reporting. Kitty Bennett contributed research.


  • Jan 10, 20

    "These guys are the 'Message Force Multipliers' of 2020. If there are no disclosures coming, we need to crowd-source this....


    Just like David Barstow’s explosive investigative report on the Pentagon’s “Message Force Multipliers” in April 2008, Lee Fang at The Intercept has done us a tremendous favor by pointing out that many of the so-called military experts who are making the rounds this week to talk about the U.S.-Iran confrontation in Iraq are in fact paid shills for the defense industry.

    That’s right: David Petraeus, Van Hipp, Jeh Johnson, John Negroponte (and these are just the ones featured in Fang’s piece)—all have ties to the Big 5 contracting companies like Lockheed Martin and Raytheon (whose stocks are soaring in response to recent events) and/or work for venture capital firms that invest in these companies. In fact, General Jack Keane, who is reportedly at the elbow of the president, advising him directly, while alternately appearing on FOX News to congratulate him after launching kinetic attacks like killing Gen. Soleimani, currently serves as a partner for such a firm (SCP Partners) and has worked for General Dynamics and Blackwater."

  • Jan 10, 20

    "A business executive accused of financial crimes in Kuwait is getting support from an all-star cast of famous Americans, including a son of the U.S. president who liberated the Gulf nation and several of President Donald Trump’s allies. They’ve helped generate a torrent of sympathetic media coverage from the Middle East to Washington.

    The boldface names are part of a $4.9 million campaign that also has been marked by subterfuge and deception, including a fake protest, thousands of dollars in payments to some U.S. opinion writers, misleading news reports and a correspondent who may not exist. A review of government filings and an examination of dozens of articles shows just how easily money can warp U.S. press coverage."

  • Jan 07, 20

    "Unseen servers began crawling the web for Chinese articles and posts. The system quickly reorganized the words and sentences into new text. His screen displayed a rapidly increasing tally of the articles generated by his product, which he dubs the “Content Farm Automatic Collection System."

    With the articles in hand, a set of websites that Peng controlled published them, and his thousands of fake social media accounts spread them across the internet, instantly sending manipulated content into news feeds, messaging app inboxes, and search results.

    "I developed this for manipulating public opinion,” Peng told the Reporter, an investigative news site in Taipei, which partnered with BuzzFeed News for this article. He added that automation and artificial intelligence “can quickly generate traffic and publicity much faster than people.”

    The 32-year-old wore Adidas Yeezy sneakers and a gold Rolex as he sat in a two-story office in the industrial part of Taichung that was filled with feng shui items such as a money frog and lucky bamboo. A riot gun, which uses compressed air to fire nonlethal projectiles, rested on his desk. Peng said he bought it for “recreational purposes.”

    In the interview, he detailed his path from sending spam emails as a 14-year-old to being recruited to help with the 2018 reelection campaign of Najib Razak, the former prime minister of Malaysia.

    Peng’s clients are companies, brands, political parties, and candidates in Asia. “Customers have money, and I don’t care what they buy,” he said. They’re purchasing an end-to-end online manipulation system, which can influence people on a massive scale — resulting in votes cast, products sold, and perceptions changed.

    Peng’s product is modeled on automation software he saw in China, which he believes no one else outside the mainland has. But while his technology may be unique, his company, Bravo-Idea, is not. There is now a worldwide industry of PR and marketing firms ready to deploy fake accounts, false narratives, and pseudo news websites for the right price.

    If disinformation in 2016 was characterized by Macedonian spammers pushing pro-Trump fake news and Russian trolls running rampant on platforms, 2020 is shaping up to be the year communications pros for hire provide sophisticated online propaganda operations to anyone willing to pay. Around the globe, politicians, parties, governments, and other clients hire what is known in the industry as “black PR” firms to spread lies and manipulate online discourse.

    A BuzzFeed News review — which looked at account takedowns announced by the platforms and at investigations by security and research firms — found that since 2011, at least 27 online information operations have been partially or wholly attributed to PR or marketing firms. Of those, 19 occurred in 2019 alone.


    Most recently, in late December, Twitter announced it removed more than 5,000 accounts that it said were part of “a significant state-backed information operation” in Saudi Arabia carried out by marketing firm Smaat. The same day, Facebook announced a takedown of hundreds of accounts, pages, and groups that it found were engaged in “foreign and government interference” on behalf of the government of Georgia. It attributed the operation to Panda, an advertising agency in Georgia, and to the country’s ruling party.

    Nathaniel Gleicher, Facebook’s head of cybersecurity policy, told BuzzFeed News “the professionalization of deception” is a growing threat.

    “The broader notion of deception and influence operations has been around for some time, but over the past several years, we have seen [...] companies grow up that basically build their business model around deception,” he said."

  • Jan 04, 20

    "I was alarmed when I learned in 2017 that the company had begun moving forward with the development of a new version of a censored Search product for China, codenamed “Dragonfly.” But Dragonfly was only one of several developments that concerned those of us who still believed in the mantra of “Don’t be evil.” I was also concerned that Cloud executives were actively pursuing deals with the Saudi government, given its horrible record of human rights abuses. Cloud executives made no secret of the fact that they wanted to hire their own policy team, which would effectively block any review of their contracts by my team. Finally, in December 2017, Google announced the establishment of the Google Center for Artificial Intelligence in Beijing — something that completely surprised me, and made it clear to me that I no longer had the ability to influence the numerous product developments and deals being pursued by the company.
    My solution was to advocate for the adoption of a company-wide, formal Human Rights Program that would publicly commit Google to adhere to human rights principles found in the UN Declaration of Human Rights, provide a mechanism for product and engineering teams to seek internal review of product design elements, and formalize the use of Human Rights Impact Assessments for all major product launches and market entries.
    But each time I recommended a Human Rights Program, senior executives came up with an excuse to say no. At first, they said human rights issues were better handled within the product teams, rather than starting a separate program. But the product teams weren’t trained to address human rights as part of their work. When I went back to senior executives to again argue for a program, they then claimed to be worried about increasing the company’s legal liability. We provided the opinion of outside experts who re-confirmed that these fears were unfounded. At this point, a colleague was suddenly re-assigned to lead the policy team discussions for Dragonfly. As someone who had consistently advocated for a human rights-based approach, I was being sidelined from the on-going conversations on whether to launch Dragonfly. I then realized that the company had never intended to incorporate human rights principles into its business and product decisions. Just when Google needed to double down on a commitment to human rights, it decided to instead chase bigger profits and an even higher stock price.
    It was no different in the workplace culture. Senior colleagues bullied and screamed at young women, causing them to cry at their desks. At an all-hands meeting, my boss said, “Now you Asians come to the microphone too. I know you don’t like to ask questions.” At a different all-hands meeting, the entire policy team was separated into various rooms and told to participate in a “diversity exercise” that placed me in a group labeled “homos” while participants shouted out stereotypes such as “effeminate” and “promiscuous.” Colleagues of color were forced to join groups called “Asians” and “Brown people” in other rooms nearby.
    In each of these cases, I brought these issues to HR and senior executives and was assured the problems would be handled. Yet in each case, there was no follow up to address the concerns — until the day I was accidentally copied on an email from a senior HR director. In the email, the HR director told a colleague that I seemed to raise concerns like these a lot, and instructed her to “do some digging” on me instead.
    Then, despite being rated and widely known as one of the best people managers at the company, despite 11 years of glowing performance reviews and near-perfect scores on Google’s 360-performance evaluations, and despite being a member of the elite Foundation Program reserved for Google’s “most critical talent” who are “key to Google’s current and future success,” I was told there was no longer a job for me as a result of a “reorganization,” despite 90 positions on the policy team being vacant at the time.
    When I hired counsel, Google assured me that there had been a misunderstanding, and I was offered a small role in exchange for my acquiescence and silence. But for me, the choice was as clear as the situation. I left. Standing up for women, for the LGBTQ community, for colleagues of color, and for human rights — had cost me my career. To me, no additional evidence was needed that “Don’t be evil” was no longer a true reflection of the company’s values; it was now nothing more than just another corporate marketing tool.
    I’ve been asked many times since returning home, “What changed?”
    First, the people. The founders and visionaries behind the company, Larry Page and Sergey Brin, disengaged and left management in the hands of new senior executives. A new CEO was hired to lead Google Cloud and a new CFO was hired from Wall Street, and beating earnings expectations every quarter became the key priority. Every year, thousands of new employees join the company, overwhelming everyone who fought to preserve the company’s original values and culture. When I joined the company there were under 10,000 Googlers and by the time I left, there were over 100,000.
    Second, the products. Some will say that Google was always a bad corporate actor, with less than transparent privacy practices. But there is a significant difference between serving ads based on a Google search and working with the Chinese government on artificial intelligence or hosting the applications of the Saudi government, including Absher, an application that allows men to track and control the movement of their female family members. Executives hell-bent on capturing cloud computing revenue from Microsoft, Oracle, and Amazon had little patience for those of us arguing for some form of principled debate before agreeing to host the applications and data of any client willing to pay.
    I think the important question is what it means when one of America’s marquee companies changes so dramatically. Is it the inevitable outcome of a corporate culture that rewards growth and profits over social impact and responsibility? Is it in some way related to the corruption that has gripped our federal government? Is this part of the global trend toward “strong man” leaders who are coming to power around the globe, where questions of “right” and “wrong” are ignored in favor of self-interest and self-dealing? Finally, what are the implications for all of us when that once-great American company controls so much data about billions of users across the globe?
    Although the causes and the implications are worth debating, I am certain of the appropriate response. No longer can massive tech companies like Google be permitted to operate relatively free from government oversight. As soon as Google executives were asked by Congress about Project Dragonfly and Google’s commitment to free expression and human rights, they assured Congress that the project was exploratory and it was subsequently shut down.
    The role of these companies in our daily lives, from how we run our elections to how we entertain and educate our children, is just too great to leave in the hands of executives who are accountable only to their controlling shareholders who — in the case of Google, Amazon, Facebook and Snap — happen to be fellow company insiders and founders."

  • Jan 04, 20

    EVERY MINUTE OF EVERY DAY, everywhere on the planet, dozens of companies — largely unregulated, little scrutinized — are logging the movements of tens of millions of people with mobile phones and storing the information in gigantic data files. The Times Privacy Project obtained one such file, by far the largest and most sensitive ever to be reviewed by journalists. It holds more than 50 billion location pings from the phones of more than 12 million Americans as they moved through several major cities, including Washington, New York, San Francisco and Los Angeles.

    Each piece of information in this file represents the precise location of a single smartphone over a period of several months in 2016 and 2017. The data was provided to Times Opinion by sources who asked to remain anonymous because they were not authorized to share it and could face severe penalties for doing so. The sources of the information said they had grown alarmed about how it might be abused and urgently wanted to inform the public and lawmakers.

    [Related: How to Track President Trump — Read more about the national security risks found in the data.]


    After spending months sifting through the data, tracking the movements of people across the country and speaking with dozens of data companies, technologists, lawyers and academics who study this field, we feel the same sense of alarm. In the cities that the data file covers, it tracks people from nearly every neighborhood and block, whether they live in mobile homes in Alexandria, Va., or luxury towers in Manhattan.

    One search turned up more than a dozen people visiting the Playboy Mansion, some overnight. Without much effort we spotted visitors to the estates of Johnny Depp, Tiger Woods and Arnold Schwarzenegger, connecting the devices’ owners to the residences indefinitely.

    If you lived in one of the cities the dataset covers and use apps that share your location — anything from weather apps to local news apps to coupon savers — you could be in there, too.

    If you could see the full trove, you might never use your phone the same way again.

    [Figure: A typical day at Grand Central Terminal in New York City. Satellite imagery: Microsoft]
    THE DATA REVIEWED BY TIMES OPINION didn’t come from a telecom or giant tech company, nor did it come from a governmental surveillance operation. It originated from a location data company, one of dozens quietly collecting precise movements using software slipped onto mobile phone apps. You’ve probably never heard of most of the companies — and yet to anyone who has access to this data, your life is an open book. They can see the places you go every moment of the day, whom you meet with or spend the night with, where you pray, whether you visit a methadone clinic, a psychiatrist’s office or a massage parlor.

    The Times and other news organizations have reported on smartphone tracking in the past. But never with a data set so large. Even still, this file represents just a small slice of what’s collected and sold every day by the location tracking industry — surveillance so omnipresent in our digital lives that it now seems impossible for anyone to avoid.


    It doesn’t take much imagination to conjure the powers such always-on surveillance can provide an authoritarian regime like China’s. Within America’s own representative democracy, citizens would surely rise up in outrage if the government attempted to mandate that every person above the age of 12 carry a tracking device that revealed their location 24 hours a day. Yet, in the decade since Apple’s App Store was created, Americans have, app by app, consented to just such a system run by private companies. Now, as the decade ends, tens of millions of Americans, including many children, find themselves carrying spies in their pockets during the day and leaving them beside their beds at night — even though the corporations that control their data are far less accountable than the government would be.

    [Related: Where Even the Children Are Being Tracked — We followed every move of people in one city. Then we went to tell them.]


    “The seduction of these consumer products is so powerful that it blinds us to the possibility that there is another way to get the benefits of the technology without the invasion of privacy. But there is,” said William Staples, founding director of the Surveillance Studies Research Center at the University of Kansas. “All the companies collecting this location information act as what I have called Tiny Brothers, using a variety of data sponges to engage in everyday surveillance.”

    In this and subsequent articles we’ll reveal what we’ve found and why it has so shaken us. We’ll ask you to consider the national security risks the existence of this kind of data creates and the specter of what such precise, always-on human tracking might mean in the hands of corporations and the government. We’ll also look at legal and ethical justifications that companies rely on to collect our precise locations and the deceptive techniques they use to lull us into sharing it.

    Today, it’s perfectly legal to collect and sell all this information. In the United States, as in most of the world, no federal law limits what has become a vast and lucrative trade in human tracking. Only internal company policies and the decency of individual employees prevent those with access to the data from, say, stalking an estranged spouse or selling the evening commute of an intelligence officer to a hostile foreign power.

    Companies say the data is shared only with vetted partners. As a society, we’re choosing simply to take their word for that, displaying a blithe faith in corporate beneficence that we don’t extend to far less intrusive yet more heavily regulated industries. Even if these companies are acting with the soundest moral code imaginable, there’s ultimately no foolproof way they can secure the data from falling into the hands of a foreign security service. Closer to home, on a smaller yet no less troubling scale, there are often few protections to stop an individual analyst with access to such data from tracking an ex-lover or a victim of abuse.

    A DIARY OF YOUR EVERY MOVEMENT
    THE COMPANIES THAT COLLECT all this information on your movements justify their business on the basis of three claims: People consent to be tracked, the data is anonymous and the data is secure.

    None of those claims hold up, based on the file we’ve obtained and our review of company practices.

    Yes, the location data contains billions of data points with no identifiable information like names or email addresses. But it’s child’s play to connect real names to the dots that appear on the maps.

    Here’s what that looks like."
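
    The re-identification the authors describe rests on a simple heuristic: a device's most frequent overnight location is usually its owner's home, and its most frequent weekday-daytime location the owner's workplace; that pair of addresses, checked against public records, often narrows the candidates to a single person. The Times has not published its method as code, so the sketch below is a minimal illustration with an invented ping format and invented time windows.

        # Sketch of the home/work inference behind this kind of re-identification.
        # The (timestamp, lat, lon) ping format and the time windows are invented.
        from collections import Counter
        from datetime import datetime

        def cell(lat, lon, places=3):
            """Bucket coordinates to roughly 100 m so repeat visits group together."""
            return (round(lat, places), round(lon, places))

        def infer_home_and_work(pings):
            """pings: iterable of (iso_timestamp, lat, lon) tuples for one device."""
            night, workday = Counter(), Counter()
            for ts, lat, lon in pings:
                t = datetime.fromisoformat(ts)
                c = cell(lat, lon)
                if t.hour >= 22 or t.hour < 6:                # overnight: home
                    night[c] += 1
                elif t.weekday() < 5 and 9 <= t.hour < 17:    # weekday daytime: work
                    workday[c] += 1
            home = night.most_common(1)[0][0] if night else None
            work = workday.most_common(1)[0][0] if workday else None
            return home, work  # this pair is often unique in public records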

  • Nov 29, 19

    "Andrew Yang, the Democratic presidential candidate best known for his $1,000-per-month universal basic income platform, says tech giants should pay people for use of their data.

    “Right now, our data is worth more than oil,” Yang said during Tuesday’s Democratic debate in Ohio. “How many of you remember getting your data check in the mail? It got lost. It went to Facebook, Amazon, Google.”

    Yang offered the idea as a way to handle such tech megaliths instead of just breaking them up (which some, like Democratic candidate Sen. Elizabeth Warren, have advocated).

    Though there are “absolutely excesses in technology and in some cases having them divest parts of their business is the right move,” according to Yang, “we also have to be realistic that competition doesn’t solve all the problems.

    “It’s not like any of us wants to use the fourth-best navigation app. That would be like cruel and unusual punishment. There is a reason why no one is using Bing today. Sorry, Microsoft. It’s true,” Yang said during Tuesday’s debate.

    Yang said that if “we say this [data] is our property and we share in the gains, that’s the best way we can balance the scales against the big tech companies,” calling it “the best way we can fight back.”

    According to Yang, companies should first be required to have customers opt-in to data collection, with a “clear and easy-to-understand statement about what is being collected and how it is going to be used.” Those who do opt-in and share personal data with companies should then “receive a share of the economic value generated from your data.”

    “Every time we post a photo or interact with a social media company, we’re putting information out there, and that information should still be ours. If somebody is profiting from our data and we decide willingly to partner with a company that’s making use of this information, then that’s only fair as long as we get a slice,” Yang told The New York Times in a story published Tuesday. “Right now we’re unaware of the value that’s changing hands and we’re definitely not getting a data check in the mail every season.”

    The idea to pay users for their data is not unique.

    “California, Colorado, Canada federal [lawmakers] and U.S. federal [lawmakers] are all working on this heavily and many presidential candidates are considering it. It has bipartisan support. So there is definitely something there and expect to hear more about it,” Glen Weyl, founder and chair of the nonprofit RadicalxChange Foundation, tells CNBC Make It. Weyl, whose group advocates for equitable distribution of the data economy, also works at Microsoft researching how technology will affect, and be affected by, global political and economic challenges.

    In his State of the State address in February, California Gov. Gavin Newsom said consumers should be compensated for use of their data.

    “I’ve asked my team to develop a proposal for a new Data Dividend for Californians, because we recognize that your data has value and it belongs to you,” he said.

    And in June, Sens. Mark R. Warner, D-Va., and Josh Hawley, R-Mo., proposed a bill that would require companies that use consumer data and have over 100 million monthly active users to disclose to consumers what data they are collecting and how it is being used to make a profit.

    “For years, social media companies have told consumers that their products are free to the user. But that’s not true – you are paying with your data instead of your wallet,” Warner said in a statement at the time the bill was released.

    The bill is pending, but Warner hopes it will become law as part of broader data privacy legislation in 2019 or 2020, according to his communications director, Rachel Cohen.

    Still, despite the germinating interest in paying for data, “there is a big question of what this actually means and how it happens,” Weyl said.

    He says consumers banding together to lobby the tech companies is the most plausible route, citing labor unions as an example of such a movement. “It would be challenging,” Weyl tells CNBC Make It, adding that “legislation can help stimulate this new sector” as well.

    Paying money to each person whose data is used by companies like Facebook (which currently has 2.7 billion monthly active users on at least one of its family of services, including Instagram, WhatsApp and Messenger) would also require establishing the infrastructure and logistical organization to pay those billions of people. Some industry insiders have said this is too complicated to be realistic.

    “You can’t control data,” Nancy Kim, a professor of law and internet studies at the California Western School of Law, told Engadget in 2018. “It’s not like I give you something tangible and I say every time you rent that tangible thing out, you give me a royalty, a certain payment.”

    But according to Yang, “I guarantee you that if it was reversed and it was tech companies that needed to extract tolls from millions of consumers, there’d be zero issue with the administrative barrier,” he told the Times. “The company would be like, ‘I need your credit card.’”

    Facebook, Google and Amazon did not respond to CNBC Make It’s requests for comment.

  • Nov 29, 19

    "Another social media giant partnering with the military-industrial complex is Facebook. The California-based company announced last year it was working closely with the neoconservative think tank, The Atlantic Council, which is largely funded by Saudi Arabia, Israel and weapons manufacturers to supposedly fight foreign “fake news.”

    The Atlantic Council is a NATO offshoot and its board of directors reads like a rogue’s gallery of warmongers, including the notorious Henry Kissinger, Bush-era hawks like Condoleezza Rice, Colin Powell, James Baker, the former head of the Department of Homeland Security and author of the PATRIOT Act, Michael Chertoff, a number of former Army Generals including David Petraeus and Wesley Clark and former heads of the CIA Michael Hayden, Leon Panetta and Michael Morell.

    39 percent of Americans, and similar numbers of people in other countries, get their news from Facebook, so when an organization like the Atlantic Council is controlling what the world sees in their Facebook news feeds, it can only be described as state censorship on a global level.

    After working with the council, Facebook immediately began banning and removing accounts linked to media in official enemy states like Iran, Russia and Venezuela, ensuring the world would not be exposed to competing ideas and purging dissident voices under the guise of fighting “fake news” and “Russian bots.”"

  • Nov 29, 19

    "Facebook has been accused of providing a platform for disinformation during the last presidential election, and has been criticized for refusing to remove ads for President Donald Trump’s reelection that include false information.

    The company has said it is taking measures to respond to the criticism. Zuckerberg is expected to testify on Wednesday to House lawmakers about Facebook’s impact on the financial services and housing sectors.

    Buttigieg has criticized Facebook and the other Big Tech platforms, though he has not gone as far as Warren. In April, Buttigieg said that he would empower the FTC to take on a heightened regulatory role when it came to Facebook and the other platforms. In May, Buttigieg said that Facebook co-founder Chris Hughes “made a very convincing case” about how Zuckerberg and other tech executives had too much power. Hughes has since left the company.

    The two individuals Zuckerberg and his wife recommended are now on staff, according to Meagher. Both of their roles appear to include working with technology. Eric Mayefsky works as senior digital analytics advisor and Nina Wornhoff serves as organizing data manager.

    Wornhoff joined the campaign in April after working as a machine learning engineer at the Chan Zuckerberg Initiative, according to her LinkedIn page. Mayefsky joined in June after working as the director of data science at Quora. He previously worked at Facebook from 2010 to 2013, his LinkedIn profile says.

    Wornhoff and Mayefsky did not immediately respond to CNBC’s request for comment. Zuckerberg, in a conference call with reporters later Monday, said his actions should not be taken as an endorsement.

    “Since the beginning of the campaign, we’ve built a top-tier operation with more than 430 staff in South Bend and around the country,” Meagher said in a statement provided to Bloomberg. “The staffers come from all types of background, and everyone is working hard every day to elect Pete to the White House.”

    Buttigieg, the mayor of South Bend, has performed well among donors in Silicon Valley since unofficially launching his presidential bid in January."

  • Nov 25, 19

    "A new report from Amnesty International accuses Facebook and Google of having a "surveillance-based business model" that threatens users' right to privacy and other human rights.

    The tech giants, said Kumi Naidoo, secretary general of Amnesty International, have amassed "unparalleled power over the digital world by harvesting and monetizing the personal data of billions of people. Their insidious control of our digital lives undermines the very essence of privacy and is one of the defining human rights challenges of our era."

    Facebook and Google, according to the report, deserve to be singled out of the so-called Big 5 for their outsize influence on internet users.

    With Facebook controlling not only its eponymous social media platform but also WhatsApp, Messenger, and Instagram, and Google parent company Alphabet in control of YouTube and the Android mobile operating system as well as the search engine, the companies "control the primary channels that people rely on to engage with the internet."

    In fact, the report continues, the two companies control "an architecture of surveillance that has no basis for comparison in human history."

    The use of the platforms isn't really free, the report argues. Users are faced with "a Faustian bargain, whereby they are only able to enjoy their human rights online by submitting to a system predicated on human rights abuse."

    The companies hoover up user data—as well as metadata like email recipients—and "they are using that data to infer and create new information about us," relying in part on artificial intelligence (AI).

    The report says that "as a default Google stores search history across all of an individual's devices, information on every app and extension they use, and all of their YouTube history, while Facebook collects data about people even if they don't have a Facebook account."

    Smartphones also offer the companies a "rich source of data," but the reach of surveillance doesn't stop there. From the report:

    This includes the inside of people's homes through the use of Home Assistants like Google's Assistant and Facebook's Portal, and smart home systems connecting multiple devices such as phones, TVs, and heating systems. Increasingly, data extraction is also stretching to public spaces through 'smart city' infrastructure designed to collect data throughout an urban area. Facebook is even developing technology that would enable tracking the inside of the human brain.

    The trove of data and metadata—which represents a "honeypot" for potential government eyes—"potentially could be used to infer sensitive information about a person, such as their sexual identity, political views, personality traits, or sexual orientation using sophisticated algorithmic models."

    "These inferences can be derived regardless of the data provided by the user," the report adds, "and they often control how individuals are viewed and evaluated by third parties: for example, in the past third parties have used such data to control who sees rental ads and to decide on eligibility for loans."

    Amnesty's report says that "the very nature of targeting, using data to infer detailed characteristics about people, means that Google and Facebook are defining our identity to the outside world, often in a host of rights-impacting contexts. This intrudes into our private lives and directly contradicts our right to informational self-determination, to define our own identities within a sphere of privacy."

    The companies have a track record of privacy abuses. Among the examples noted in the report:

    • In 2018 journalists discovered that Google keeps location tracking on even when you have disabled it. Google subsequently revised the description of this function after the news story but has not disabled location tracking even after users turn off Location History. Google now faces legal action by Australia's competition watchdog over the issue.

    • Facebook has acknowledged that it knew about the data abuses of political micro-targeting firm Cambridge Analytica months before the scandal broke.

    • Facebook has also acknowledged performing behavioural experiments on groups of people—nudging groups of voters to vote, for example, or lifting (or depressing) users' moods by showing them different posts on their feed.

    Facebook and Google, the report says, "have conditioned access to their services on 'consenting' to processing and sharing of their personal data for marketing and advertising, directly countering the right to decide when and how our personal data can be shared with others."

    The potential violations don't end with privacy attacks because "a person may only give up some seemingly innocuous data such as what they 'like' on Facebook. But once aggregated, that data can be repurposed to deliver highly targeted advertising, political messages, and propaganda, or to grab people's attention and keep them on the platform."

    Keeping users on the platform, the report says, means they see more ads and potentially click more ads, thereby creating more data in a cycle of corporate surveillance.

    That in turn threatens people's right to autonomy and free development of ideas because these targeted ads "can influence, shape, and modify opinions and thoughts."

    From the report:

    The starkest and most visible example of how Facebook and Google's capabilities to target people at a granular level can be misused is in the context of political campaigning—the most high-profile case being the Cambridge Analytica scandal. The same mechanisms and tools of persuasion used for the purposes of advertising can be deployed to influence and manipulate people's political opinions. The use of microtargeting for political messaging can also limit people's freedom of expression by "creating a curated worldview inhospitable to pluralistic political discourse."

    Such abuses and potential abuses, says the report, make clear the era of tech self-regulation must come to an end, with governments and companies alike taking steps to address the rights violations.

    The report calls on governments to ensure companies are prevented from making access to their services conditional on users consenting to the collection, processing, or sharing of their personal data for marketing or advertising. They must also enact legislation to ensure the right not to be tracked and to "ensure companies are held legally accountable for human rights harms linked to such systems."

    Companies must switch to a model that respects rights and provide transparency about abuses they identify and remedies they will provide. They must also not lobby for weakened data protection and privacy legislation.

    "Google and Facebook chipped away at our privacy over time," Naidoo added in his statement. "We are now trapped. Either we must submit to this pervasive surveillance machinery—where our data is easily weaponized to manipulate and influence us—or forego the benefits of the digital world. This can never be a legitimate choice."

    "We must reclaim this essential public square," he continued, "so we can participate without having our rights abused.""

  • Nov 25, 19

    "Last week a new Facebook challenge went viral asking users to post a photo from 10 years ago and one from today captioning “how did aging effect you?” Now being called the “10-Year Challenge.” Over 5.2 million, including many celebrities, participating in this challenge. It follows closely after the “Bird Box Challenge” and the “Top Nine Photos of the Year Challenge” but this one has caused quite a stir and some concern from users.

    Speculation arose about the motive behind this viral challenge, with users questioning whether it was a ploy by Facebook to harvest facial recognition data. Kate O’Neill, a writer for Wired, wrote an op-ed exploring the possibility that this was more than just a fun challenge to share with friends.

    “Imagine that you wanted to train a facial recognition algorithm on age-related characteristics and, more specifically, on age progression (e.g., how people are likely to look as they get older). Ideally, you'd want a broad and rigorous dataset with lots of people's pictures. It would help if you knew they were taken a fixed number of years apart—say, 10 years,” said O’Neill.


    However, many people have argued that Facebook already has access to these photos, since the challenge often asked people to share their first profile picture alongside their current one. O’Neill countered that people don’t always upload photos in chronological order, and that many people have profile pictures of something other than themselves: cartoons, family members, animals, political statements, etc. The challenge gives Facebook the opportunity to get a “clean” version of who you are from the context you add, such as stating your age, the year each photo was taken, or other information you share in the post."
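
    As a rough illustration of the dataset O’Neill hypothesizes, here is a minimal sketch of how paired “then and now” photos might be assembled for age-progression training. The record fields and helper function below are invented for this sketch; nothing here describes a confirmed Facebook system.

        # Hypothetical sketch only: Photo records, field names, and the pairing
        # helper are assumptions made up for illustration.
        from dataclasses import dataclass

        @dataclass
        class Photo:
            user_id: str
            year: int          # year stated in the caption or post context
            shows_owner: bool  # a real face of the account owner, not a cartoon or pet

        def build_age_pairs(photos, gap_years=10):
            """Group photos by user; keep (then, now) pairs exactly gap_years apart."""
            by_user = {}
            for p in photos:
                if p.shows_owner:  # the "clean" labels the challenge conveniently supplies
                    by_user.setdefault(p.user_id, []).append(p)
            pairs = []
            for group in by_user.values():
                group.sort(key=lambda p: p.year)
                for old in group:
                    for new in group:
                        if new.year - old.year == gap_years:
                            pairs.append((old, new))  # one labeled training example
            return pairs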

  • Nov 21, 19

    "One causality in this propaganda war is Daniel McAdams, Executive Director of the Ron Paul Institute for Peace and Prosperity, a public advocacy group that argues that a non-interventionist foreign policy is crucial to securing a prosperous society at home. McAdams served as Senator Paul’s foreign affairs advisor between 2001 and 2012. Before that, he was a journalist and editor for the Budapest Sun and a human rights monitor across Eastern Europe. 

    McAdams, who spent much of his time on Twitter calling out the war machine supported by both parties, was recently permanently banned from the platform for so-called “hateful conduct.” His crime? Challenging Fox News anchor Sean Hannity over his hour-long segment claiming to be against the “deep state,” while simultaneously wearing a CIA lapel pin. In the exchange, McAdams called Hannity “retarded,” claiming he was becoming stupider every time he watched him.

    Yes, despite that word and its derivatives having been used on Twitter over ten times in the previous minute, and often much more aggressively than McAdams used it – only McAdams fell victim to Twitter’s ban hammer. Something didn’t make sense about this ban. One only needs to read the replies under any of President Trump’s tweets to see far more hateful speech than what McAdams displayed to suspect foul play.

    I spoke with McAdams about the ban and began by asking him if he accepts the premise of the ban, or if he believes something else was afoot."

  • Nov 21, 19

    "With a surprising new proof, two young mathematicians have found a bridge across the finite-infinite divide, helping at the same time to map this strange boundary.

    The boundary does not pass between some huge finite number and the next, infinitely large one. Rather, it separates two kinds of mathematical statements: “finitistic” ones, which can be proved without invoking the concept of infinity, and “infinitistic” ones, which rest on the assumption — not evident in nature — that infinite objects exist.

    Mapping and understanding this division is “at the heart of mathematical logic,” said Theodore Slaman, a professor of mathematics at the University of California, Berkeley. This endeavor leads directly to questions of mathematical objectivity, the meaning of infinity and the relationship between mathematics and physical reality.

    More concretely, the new proof settles a question that has eluded top experts for two decades: the classification of a statement known as “Ramsey’s theorem for pairs,” or RT²₂. Whereas almost all theorems can be shown to be equivalent to one of a handful of major systems of logic — sets of starting assumptions that may or may not include infinity, and which span the finite-infinite divide — RT²₂ falls between these lines. “This is an extremely exceptional case,” said Ulrich Kohlenbach, a professor of mathematics at the Technical University of Darmstadt in Germany. “That’s why it’s so interesting.”

    In the new proof, Keita Yokoyama, 34, a mathematician at the Japan Advanced Institute of Science and Technology, and Ludovic Patey, 27, a computer scientist from Paris Diderot University, pin down the logical strength of RT²₂ — but not at a level most people expected. The theorem is ostensibly a statement about infinite objects. And yet, Yokoyama and Patey found that it is “finitistically reducible”: It’s equivalent in strength to a system of logic that does not invoke infinity. This result means that the infinite apparatus in RT²₂ can be wielded to prove new facts in finitistic mathematics, forming a surprising bridge between the finite and the infinite. “The result of Patey and Yokoyama is indeed a breakthrough,” said Andreas Weiermann of Ghent University in Belgium, whose own work on RT²₂ unlocked one step of the new proof."
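
    For reference, the statement at issue has a standard formulation: every 2-coloring of the pairs of natural numbers admits an infinite set on which the coloring is constant. In LaTeX notation:

        % Ramsey's theorem for pairs and two colors (RT^2_2);
        % [X]^2 denotes the set of two-element subsets of X.
        \forall f \colon [\mathbb{N}]^2 \to \{0,1\} \;\;
        \exists H \subseteq \mathbb{N} \;
        \bigl( H \text{ is infinite} \;\wedge\; f \text{ is constant on } [H]^2 \bigr)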

  • Nov 20, 19

    "As I posted yesterday, I was suspended on Twitter late in the morning even though I couldn’t imagine having violated any of the platform’s rules – or at least the best known ones, which seek to bar bullying and hate speech and other such noxious practices. (Not that I’m saying I agree with this Twitter policy, largely because of related free speech and definitional concerns, but that’s a separate issue.)

    Late in the afternoon, I was pleased to learn that I had been reinstated. I was also pleased that Twitter responded in detail to my request for an explanation of its decision – though I must confess to being puzzled by its rationale, and by the reasoning (or the parameters used by the algorithms that apparently make most of these calls) that it held responsible for the suspension.

    According to Twitter, I had been:

    >”using a trending or popular hashtag with an intent to subvert or manipulate a conversation or to drive traffic or attention to accounts, websites, products, services, or initiatives”; and

    >”tweeting with excessive, unrelated hashtags in a single Tweet or across multiple tweets.”

    For those of you unfamiliar with the hashtag thing, it involves putting the symbol that looks like a tic-tac-toe puzzle in front of a term in order to capitalize on that term’s popularity in the Twitter-verse and call attention to a Tweet. So for example, in Tweets I send out naming the President, I use #Trump. In Tweets I send out about the monthly U.S. jobs reports, I use #jobs. And typically, since individual Tweets usually include several such terms, these Tweets would include multiple hashtags. (E.g., #jobs and #economy.)

    Since one of my main purposes in Tweeting is reaching the largest possible audience with my material, I thought the practice completely natural. And P.S. – I’m far from the only Tweeter who uses hashtags (although I have acquired something of a reputation for using them frequently).

    As a result, I’m completely mystified by the claim that I’ve used hashtags “excessively.” And I’m totally baffled at also being accused of using “unrelated hashtags” – since all those I included bear on the Tweet’s main subject.

    Have I been using “trending or popular hashtags” to “subvert or manipulate a conversation”? What on earth does that mean? And as for “driving traffic to accounts”? Of course, as mentioned above, I’ve been hoping to attract attention to my own. But that’s the whole point of using hashtags – and of Twitter offering the feature in the first place!

    Finally, the only “websites, products, services, or initiatives” I’ve ever used hashtags, excessively or not, to promote have been RealityChek, outside freelance articles and media appearances of mine, and work by others (including articles and posts and other material) that I believe merits attention. If that’s my crime, I’m guilty as charged. But what could possibly be wrong with any of the above objectives?

    Of course I’m glad that it all worked out for the best, and that Twitter evidently judged my transgressions mild enough to warrant quick reinstatement. But contrary to my speculation yesterday, it wasn’t an entirely innocent mistake or accident on the platform’s part. And it should be clear that if Twitter’s stated rules and parameters caught me, they’re way too broad and vague, and need serious rethinking."

  • Nov 18, 19

    "Here’s some of what the WSJ revealed in its investigation published last week:

    More than 100 interviews and the Journal’s own testing of Google’s search results reveal:

    • Google made algorithmic changes to its search results that favor big businesses over smaller ones, and in at least one case made changes on behalf of a major advertiser, eBay Inc., contrary to its public position that it never takes that type of action. The company also boosts some major websites, such as Amazon.com Inc. and Facebook Inc., according to people familiar with the matter.

    • Google engineers regularly make behind-the-scenes adjustments to other information the company is increasingly layering on top of its basic search results. These features include auto-complete suggestions, boxes called “knowledge panels” and “featured snippets,” and news results, which aren’t subject to the same company policies limiting what engineers can remove or change.

    • Despite publicly denying doing so, Google keeps blacklists to remove certain sites or prevent others from surfacing in certain types of results. These moves are separate from those that block sites as required by U.S. or foreign law, such as those featuring child abuse or copyright infringement, and from changes designed to demote spam sites, which attempt to game the system to appear higher in results.

    • In auto-complete, the feature that predicts search terms as the user types a query, Google’s engineers have created algorithms and blacklists to weed out more-incendiary suggestions for controversial subjects, such as abortion or immigration, in effect filtering out inflammatory results on high-profile topics (a minimal sketch of this kind of filtering appears after this list).

    • Google employees and executives, including co-founders Larry Page and Sergey Brin, have disagreed over whether and to what extent to intervene in search results. Employees can push for revisions in specific search results, including on topics such as vaccinations and autism.

    • To evaluate its search results, Google employs thousands of low-paid contractors whose purpose the company says is to assess the quality of the algorithms’ rankings. Even so, contractors said Google gave feedback to these workers to convey what it considered to be the correct ranking of results, and they revised their assessments accordingly, according to contractors interviewed by the Journal. The contractors’ collective evaluations are then used to adjust algorithms.
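
    As referenced above, here is a minimal sketch of blacklist-style suggestion filtering, reconstructed from the Journal’s description. The blacklist contents and the candidate suggestions are assumptions for illustration; this is not Google’s actual code.

        # Hypothetical reconstruction of blacklist filtering over auto-complete
        # suggestions; the terms and candidates below are made up for illustration.
        BLACKLISTED_TERMS = {"hypothetical incendiary phrase", "another banned phrase"}

        def filter_suggestions(candidates):
            """Drop candidate completions containing any blacklisted term."""
            return [s for s in candidates
                    if not any(term in s.lower() for term in BLACKLISTED_TERMS)]

        # In a real system the candidates would come from a ranking model keyed
        # on the typed prefix; here they are supplied directly.
        print(filter_suggestions([
            "immigration statistics 2019",
            "immigration hypothetical incendiary phrase",
        ]))  # only the first suggestion survives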

    This comes down to power and control, and the tech giants are now maturing into their predictable role as algorithmic gatekeepers of a new digital feudalism. Google has the power to shape your mind by limiting what you have access to, while at the same time wielding the power to destroy your livelihood with a tweak of an algorithm. Although a lot of the most nefarious stuff is still being conducted at the margins so the masses don’t realize what’s happening, stealth censorship will continue to be rolled out until the internet most people use becomes for all practical purposes an information gulag where nothing but shameless propaganda is pumped onto screens by hidden algorithms tweaked (for your own good) by billionaires.

    A perfect example of this can be seen in how YouTube hides one of the most popular videos ever made regarding the attacks of September 11, 2001. The short clip, made by James Corbett, is titled 9/11: A Conspiracy Theory, and has over 3.2 million views. Nevertheless, here’s what YouTube spits out if you search by the exact title of the video.

    [Screenshot omitted: YouTube search results for the video’s exact title, which do not surface the video.]

    Keep scrolling and you still won’t find it. This isn’t YouTube helping users find the information they want, it’s YouTube hiding content from its users. Moreover, the only reason I’m aware of the censoring of this particular item is because I’m familiar with the video from years ago. You can be certain this sort of thing is more common than you realize and will only get worse.

    The internet was supposed to free information while connecting people and ideas across borders. This promise is being lost with each passing day, and rectifying the situation is one of the most significant challenges we face. Should we fail, we can look forward to a future where humanity consists of little more than digitally lobotomized automatons responding like lab rats to algorithms created by tech CEOs and their national security state partners."

  • Nov 16, 19

    "And what they do with all your information takes us down a rabbit hole where innocent targeting advertising becomes weaponized behavioral manipulation.

    Power users tend to dominate your feed and the users with more radical opinions get more likes and followers. If you follow these people and support these opinions, Facebook’s algorithm then retargets you with ads and promoted posts to reinforce this point of view. These repeated exposures solidify a new, radicalized opinion and the vicious cycle keeps spinning."
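
    The cycle described above is easy to caricature in code. Below is a toy model of an engagement-driven feed, written purely as an illustration of the feedback dynamic; the update rule and the parameters are invented for this sketch and do not describe Facebook’s actual ranking system.

        # Toy model: the feed shows content matching the user's current leaning,
        # and each exposure nudges the leaning further the same way. Invented
        # numbers; not Facebook's ranking logic.
        def simulate_feedback_loop(opinion, steps=10, nudge=0.1):
            """opinion ranges over [-1, 1]; the feed mirrors its current sign."""
            for _ in range(steps):
                shown = 1.0 if opinion >= 0 else -1.0  # content matching the leaning
                opinion = max(-1.0, min(1.0, opinion + nudge * shown))
            return opinion

        print(simulate_feedback_loop(0.05))  # a mild leaning drifts toward 1.0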

  • Nov 14, 19

    "This 30+ page technical slide deck explains how Google is able to leverage contemporary data science and machine learning to classify web sites, visitors, and advertisers based on contextual analysis. It also explains how authoritative journalists, raters, blue checks, and narrative owners generate hit pieces, articles, and content designed to exploit Google's machine learning to attack arbitrary opponents.

    Google has to continuously update its contextual associations to ensure its product can connect advertisers to potential consumers. Its systems are continuously trained by a large volume of daily content changes. Google assigns "authoritativeness" ratings to content associated with "authoritative" people and organizations, giving those actors greater influence over the contextual associations Google forms. If these authoritative actors all create their content in unison to make sure Google's machine learning establishes specific contextual associations, they can engage in what the deck calls vanishing gradient stuffing: exploiting an underlying property of machine learning to hide non-authoritative content, steer search engine behavior, capture advertising and commercial opportunities, and shape future content creation.

     You can download this 30+ page technical slide deck to understand precisely how journalists are actively engaging in political censorship.

     Much of this research was derived from Zach Vorhies's leaks. You can quickly search those leaks at https://google-leak.surge.sh."

  • Nov 12, 19

    "Google is engaged with one of the U.S.’s largest health-care systems on a project to collect and crunch the detailed personal-health information of millions of people across 21 states.

    The initiative, code-named “Project Nightingale,” appears to be the biggest effort yet by a Silicon Valley giant to gain a toehold in the health-care industry through the handling of patients’ medical data. Amazon.com Inc., Apple Inc. and Microsoft Corp. are also aggressively pushing into health care, though they haven’t yet struck deals of..."
