The Integrity and Commitment of RSN
I'm truly delighted to finally send you a message voicing my thanks for a news service which provides me with information from some of the finest journalists and writers in the reporting business.
I especially am grateful for the integrity and commitment of RSN in presenting the truth and the level of intelligence in the selection of the articles we are fortunate to be able to receive.
Thank you!
Florence, RSN Reader-Supporter
If you would prefer to send a check:
Reader Supported News
PO Box 2043
Citrus Hts, CA 95611
Heather Digby Parton | The Entire Trump Campaign Was a Scam - and It Is Not Over
Heather Digby Parton, Salon
Parton writes: "During the 2016 presidential campaign, candidate Donald Trump happened to be in the middle of a major federal class-action lawsuit spanning several states over an allegedly fraudulent operation called Trump University."
The New York Times reports that Trump's campaign issued $122 million in donation refunds. How could so many be duped again?
During the 2016 presidential campaign, candidate Donald Trump happened to be in the middle of a major federal class-action lawsuit spanning several states over an allegedly fraudulent operation called Trump University. You may recall that one of his first racist scandals during the 2015 primary campaign came about after he claimed the judge in that federal fraud case was biased against him because of the judge's Hispanic heritage. The Trump University suit was a big story during that campaign but, as always, there was so much chaos surrounding Trump that I'm not sure people really understood what it was all about. It should have been the biggest story because it was unfolding during the campaign and illustrated everything the people needed to know about Donald Trump. It showed, in living color, that Trump was a real, bona fide con artist, in the literal sense of the word.
The grift was pretty simple. It started off as an online operation that quickly morphed into one of those bait-and-switch operations where they entice you to attend a free lecture from some "expert" who promises to teach you the tricks of the trade (or the secret of life), which turns out to be nothing more than a sales pitch to buy more expert lessons in the same subject, lessons that also turn out to be sales pitches. It's what a lot of multi-level marketing schemes and, frankly, cults do to bilk people out of their savings. A 2017 report from the Center for American Progress explains further:
Near the end, Trump University focused almost exclusively on the seminars, both running them and licensing the brand name out to an organization called Business Strategies Group. These seminars often began with a free session to get people in the door. Once individuals arrived, salespeople often tried to upsell them the "Trump Elite Packages," ranging from the Bronze Elite Package for $9,995 up to the Gold Elite Package for $34,995.
Trump, of course, had a TV show in which he pretended to be a genius businessman and that was enough to get a lot of naive fans to sign on, apparently believing the lies in the brochures, which said that Trump had personally chosen the instructors and the so-called courses were credentialed by major universities like Stanford and Northwestern. The court case showed that none of that was true. And according to the Washington Post, Trump was personally involved in all the advertising that made those claims.
Despite pressure from seminar leaders to write favorable reviews of the "course," there was an unusually high rate of refund requests from unsatisfied "students." Time magazine reported that it was 32% for the three-day seminar and 16% for the Gold Elite package.
Trump eventually settled the fraud case for $25 million after the election, successfully shutting it down before it reached a courtroom. In the end, 6,000 customers were eligible for a piece of the $25 million settlement.
How in the world could an advanced democracy ever elect someone who was so blatantly a con man? It wasn't as if it was far in the past or there was some serious dispute as to whether or not it was really a scam. It was obvious to anyone who looked at the case that there was no "university" and Donald Trump was running a grift. It wasn't the first or the only one but it was being litigated right in the middle of the campaign.
I was reminded of that astonishing story this weekend when I read Shane Goldmacher's shocking New York Times report on the Trump campaign's fundraising practices. If anything, they were even more deceptive than the Trump University con.
Goldmacher reported that the campaign and its online fundraising platform WinRed hustled its most loyal supporters out of tens of millions of dollars with deceptive donation links in its emails and on its websites. It's still unknown how many people unknowingly signed up for weekly recurring donations and "money bombs" (agreements to donate a lump sum on a future date), but there were so many requests for refunds that at one point 1 to 3 percent of all credit card complaints in the U.S. were about WinRed charges.
The credit card companies told the Times that they were inundated with complaints and requests to cancel cards:
"It started to go absolutely wild," said one fraud investigator with Wells Fargo. "It just became a pattern," said another at Capital One. A consumer representative for USAA, which primarily serves military families, recalled an older veteran who discovered repeated WinRed charges from donating to Mr. Trump only after calling to have his balance read to him by phone.
The unintended payments busted credit card limits. Some donors canceled their cards to avoid recurring payments. Others paid overdraft fees to their bank. There is no way of knowing how many people just paid the bills, either thinking they had no recourse or failing to notice it.
The Times compared the GOP's WinRed donation platform to ActBlue, the successful Democratic site on which it was modeled, and found the GOP's practices leading up to the 2020 election far more unscrupulous. The refund request rates weren't even close. In fact, "the Trump/RNC operation issued more online refunds in *December 2020* than the Biden/DNC operation issued in all of 2019 and 2020." But then WinRed itself is the product of Trump-affiliated henchmen who built their platform for profit, unlike the nonprofit ActBlue, and they even kept their processing fees when people demanded refunds, which ActBlue does not. They made a lot of money on this scheme.
The sheer number of refunds to Trump donors amounted to a huge no-interest (and, for WinRed, profitable) loan to the campaign, a loan that required the people "loaning" the money to go to a great deal of trouble to get back funds they never consciously agreed to lend in the first place. Trump's post-election "Stop the Steal" fundraising at least partially went to pay off those "loans" from the campaign, making the whole scheme very "Ponzi-esque."
It wasn't just the Trump campaign that did this. GOP candidates who used WinRed employed the same tactics, including the Republicans in Georgia's two Senate runoff campaigns. There were many, many requests for refunds of donations to both Kelly Loeffler and David Perdue, the Times reported.
For his part, Trump is still doing it. He's been telling his supporters not to send money to the RNC and to send it to his Save America PAC where he can do pretty much anything he wants with the money. The PAC uses WinRed. Anyone who decides they want to throw money into that black hole should read the fine print very carefully. They could be signing up to give the billionaire Donald Trump a weekly donation for life.
Amazon.com founder and CEO Jeff Bezos. (photo: Getty)
Labor Board Finds Amazon Illegally Fired Activist Workers
Annie Palmer, CNBC
Palmer writes: "The National Labor Relations Board (NLRB) has found Amazon illegally retaliated against two of its most outspoken internal critics when it fired them last year."
READ MORE
The Fox News Channel headquarters in New York. (photo: Drew Angerer/Getty)
Israeli Snoop-for-Hire Posed as a Fox News Journalist for a Spy Operation
Adam Rawnsley, The Daily Beast
Rawnsley writes: "Operatives from an Israeli private investigations company posed as a Fox News journalist and an Italian reporter in an attempt to dig up dirt on lawsuits against the emirate of Ras Al Khaimah in the UAE, The Daily Beast can reveal."
In early 2020, individuals masquerading as a Fox News researcher and a reporter for Italy’s La Stampa newspaper approached two men involved in litigation against Ras Al Khaimah, one of the seven emirates that make up the United Arab Emirates.
The impostors sought to trick their targets into revealing information about their feuds with the emirate’s leadership and learn more about lawsuits against it.
After The Daily Beast shared information about the impersonators with Facebook’s security team, Facebook took action and was able to attribute two of the bogus personas they used to an Israel-based “business intelligence” firm named Bluehawk CI.
Bluehawk CI was founded by Guy Klisman, a former Israeli military intelligence officer, and describes itself as an intelligence firm staffed by “alumni from special units in the Israeli intelligence community.” The firm’s website boasts that its “specialization is in litigation support by providing answers to complex queries” and markets services from “social engineering & PR campaign management” to cybersecurity services for its clients.
Bluehawk CI did not respond to The Daily Beast’s emailed requests for comment.
Facebook’s attribution of the intelligence-gathering effort to a private company highlights the legal and ethical challenges surrounding the intelligence-for-hire industry, which offers a range of services to deep-pocketed clients.
The impersonators first reached out to Oussama El Omari, an American citizen who had worked as the CEO of the Ras Al Khaimah Free Trade Zone and sued the emirate in 2016 for what he says was a lump-sum payment owed to him at the end of his service as part of his contract. The suit was subsequently dismissed in 2017.
In February 2020, “Samantha” emailed El Omari and introduced herself as “a journalist and researcher at the FOX news channel in New York” who was interested in writing about “the many cases of immigration and detention between the borders of the Emirates and the Arab Peninsula.”
Samantha’s English seemed clumsy (she wrote that she was “conducting a research about the situation in the Emirates”), but it appeared as though she had reached out from a legitimate Fox News email address. She was familiar with El Omari’s case so he welcomed her outreach, thinking it could bring some press attention to his feud with the leadership of Ras Al Khaimah.
El Omari had enjoyed his career at Ras Al Khaimah’s free trade zone, where he’d worked alongside Faisal bin Saqr al Qassimi, a member of Ras Al Khaimah’s royal family. But in court papers, El Omari said he found himself “caught in a royal family conflict and power play” when Faisal’s father, Saqr, the former ruler of Ras Al Khaimah, passed away and Faisal’s brother, Saud, took the throne.
In his 2016 lawsuit, El Omari claimed that after Faisal was removed from the free trade zone’s leadership, he was unjustly fired from the organization “and remains today, persecuted in the RAK Rulers Court, in absentia, without due process of law,” according to a complaint in the suit. A 2015 conviction against El Omari in Ras Al Khaimah for embezzlement, he has claimed, was a politically motivated false charge.
“Samantha” was familiar with El Omari’s legal fight, and in a Skype interview with him, projected sympathy. “I want really to uncover the wrongdoing in all aspect I can really, you know, find,” she said in stilted, heavily accented English. Hiding behind a Fox logo, she pumped him for information on his “knowledge about allegations found in three lawsuits” against Ras Al Khaimah and the parties involved, according to a lawsuit filed by El Omari in March 2020.
Then, just as quickly as she appeared, “Samantha” ceased to exist. The Fox News email address she had used to contact El Omari—foxnews-middleeast.com—turned out to be a fake, unrelated to the real Fox News. A phone number listed at the bottom of her email was, in fact, Fox News’ public customer support line.
In the March 2020 suit filed by El Omari, he claimed that “Samantha” had been “based on the stolen identity of a real young woman, similar in age and appearance,” who had previously worked at the news channel.
The fake Fox reporter’s approach resembles a similar ruse in which a Facebook user pretending to be an Italian reporter approached Khater Massaad, a Lebanese-Swiss citizen who had worked as the chief of Ras Al Khaimah’s sovereign wealth fund, RAKIA, until he left in 2012. In 2015, a court in Ras Al Khaimah convicted Massaad in absentia of embezzlement from RAKIA and accused him of having pocketed millions from the organization.
Like El Omari, Massaad has claimed that the charges against him were false and politically motivated—a result of Ras Al Khaimah viewing him as an ally of Faisal, and an opponent to the emirate’s current government.
The fake Italian reporter approached Massaad via Facebook message asking to discuss his relationship with the government of Ras Al Khaimah. Massaad did not engage with the attempt. As it turns out, he had good reason not to: The curious Italian reporter was an impostor linked to Bluehawk CI.
Bluehawk CI has few traces online. Its CEO, Guy Klisman, lists himself as a 25-year Israeli military intelligence veteran, and has been referred to as a “former cyber spy.” Another potential employee described himself as Bluehawk CI’s “UAE regional business development manager,” writing that he was “an expert in the UAE and the Arabian Gulf” as well as a veteran of the Israeli military’s much-respected cybersecurity and signals intelligence outfit, Unit 8200.
What’s less clear is who may have hired Bluehawk CI to carry out the campaign, and why. El Omari filed a lawsuit in federal court alleging that employees at a range of firms retained by Ras Al Khaimah were behind the fake Fox reporter. In court, the defendants have all denied El Omari’s allegations that they had anything to do with the hoax.
El Omari and Massaad’s run-ins with fake personas highlight what critics of Ras Al Khaimah say are the obstacles they face when trying to sue the emirate in court.
In a separate incident, a fake philanthropist reached out to Radha Stirling, an attorney who has represented both El Omari and Massaad in court cases involving Ras Al Khaimah. Unlike the fake reporters linked to Bluehawk CI, it’s unclear who was responsible for this attempt as there is insufficient evidence to attribute it to any specific actor. But the incident, which involved a crude attempt to hack the attorney’s phone, shows the lengths that some are apparently willing to go to seek information about lawsuits against Ras Al Khaimah.
Last year, “Justine Dutroux” showed up in Stirling’s inbox and introduced herself as an assistant to a wealthy philanthropist, hinting that she might be interested in funding Stirling’s work on cases involving Ras Al Khaimah (RAK).
Stirling, however, was suspicious from the start.
“They were very keen for me to give them information pertaining to which ‘players’ I was in contact with, within the various lawsuits involving RAK,” she told The Daily Beast. “They asked if I could establish contacts who are currently in RAK, close to the royal family, that I could introduce to them. In other words, they wanted me to oust those who may be traitors.”
“Justine” had other interests, too. Specifically, she was curious about Haya bint Hussein, the Jordanian princess who married the ruler of Dubai, Sheikh Mohammed bin Rashid Al Maktoum, in 2004 but left the UAE and her husband for the U.K. two years ago, eventually filing for divorce and causing a scandal in the royal court.
“They wanted to know whether I was in touch with Princess Haya and whether I could introduce them to Lady Shackleton, Haya’s lawyer,” Stirling said.
“Justine,” according to Stirling, “wanted to know in particular about Princess Haya’s personal assistant,” whether she still worked for the princess, and if Stirling could help make an introduction to the princess and her entourage.
Throughout the conversations, “Justine” used the lure of money as bait to gain Stirling’s confidence. She offered Stirling a private jet trip to Morocco to meet with her employer and asked her to send an invoice for payment.
And then the conversation took an altogether more sinister turn. Screenshots reviewed by The Daily Beast show that “Justine” sent Stirling two apps labeled “PaymentsApp” and “CapitalControl” through WhatsApp, explaining that the apps would allow her to monitor the billionaire’s payments to her firm and make future payments easier.
The programs would have done nothing of the sort. The Daily Beast shared the two applications with the University of Toronto’s Citizen Lab, a research organization focused on the intersection of human rights and cybersecurity, for analysis.
“This is remote access malware built on the publicly available Metasploit framework,” John Scott-Railton, a senior researcher at Citizen Lab, told The Daily Beast. (Metasploit is an open-source penetration-testing framework that gives security researchers a range of attack tools.) He explained that the malware sent to Stirling is “not at all sophisticated, but if the social engineering works, then it would be a viable way to monitor somebody.”
In this case, it wasn’t. The hackers had mistakenly sent malware designed for an Android operating system to an iPhone, where it wouldn’t have worked.
But Stirling’s suspicions had still served her well. As “Justine” dangled money and malware, she quietly reached out to cybersecurity experts who helped her embed a script inside a document which, when opened, contacted a server, giving her team the IP address of the computer that had opened the file.
The code, known as a “canary token,” showed that the file was opened at least three times—twice from computers connected to IP addresses in Australia and once on a computer connected to an Israeli IP address.
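The mechanics of a canary token are simple: a unique URL is embedded in a document, and whoever opens the document unknowingly fetches that URL, revealing their IP address to the server that hosts it. The sketch below illustrates the idea in miniature; it is a hypothetical example, not the actual tooling used by Stirling's team.

```python
# Minimal canary-token sketch: a tiny server logs the IP of whoever
# fetches a unique token URL embedded in a document. (Illustrative only.)
import threading
import urllib.request
import uuid
from http.server import BaseHTTPRequestHandler, HTTPServer

hits = []  # (token, ip) pairs recorded by the canary server


class CanaryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The URL path carries the unique token; the TCP peer address
        # is the IP of the machine that opened the document.
        hits.append((self.path.lstrip("/"), self.client_address[0]))
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # silence default request logging


# One token per document, so each hit can be attributed to a recipient.
token = uuid.uuid4().hex
server = HTTPServer(("127.0.0.1", 0), CanaryHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Opening the document triggers a fetch of the embedded canary URL:
canary_url = f"http://127.0.0.1:{server.server_port}/{token}"
urllib.request.urlopen(canary_url)
server.shutdown()
```

After the fetch, `hits` holds the token and the opener's IP; in Stirling's case the equivalent log showed openings from Australian and Israeli addresses.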
Stirling, a citizen of the U.S., U.K., and Australia, says she is determined to find out who was behind the attempt to hack her.
“We are ensuring that those responsible are held to full account,” she told The Daily Beast in a text exchange. “Hacking is a serious crime, and it’s important that the FBI take such crimes against U.S. citizens seriously.”
President Joe Biden pledged during the 2020 campaign to scrap the Trump administration's changes to Title IX. (photo: Evan Vucci/AP)
Biden Administration Announces Next Steps in Overhauling Title IX Campus Sexual Assault Rules
Tyler Kingkade, NBC News
Kingkade writes: "The Education Department announced plans Tuesday to hold a public hearing on how schools ought to handle sexual misconduct cases as the first step in a planned overhaul of Title IX regulations."
The government is planning multiple opportunities for the public to weigh in on how schools should respond when a student reports sexual misconduct.
In a letter released by the Education Department, the hearing is described as a chance for students, parents, school officials and advocates to weigh in before the Biden administration offers its proposal for how K-12 schools and colleges receiving public funding must respond to allegations of sexual assault and harassment. The department has not yet announced a timeline for the hearing but plans to share more details in the coming weeks. The hearing will occur over multiple days and include a virtual component, a department official said.
After the hearing, the department intends to begin a formal process known as "proposed rule-making" to rewrite the Title IX rules, which would include another round of public comments.
The department will also issue question-and-answer-style guidance in the coming weeks to advise schools how to adhere to the current Title IX rules.
During the presidential campaign, Joe Biden vowed to scrap the Trump administration's new regulation on campus sexual misconduct, which took effect in August under Title IX, a gender equity law. Former Education Secretary Betsy DeVos had said she had designed the new rules to offer a clearer, fairer process to adjudicate sexual assault complaints; victims' rights advocates criticized the regulation for narrowing the definition of sexual harassment and limiting the incidents schools could investigate.
Biden signed an executive order last month directing Education Secretary Miguel Cardona to review and consider rewriting the regulation.
"Today's action is the first step in making sure that the Title IX regulations are effective and are fostering safe learning environments for our students while implementing fair processes," Cardona said in a statement Tuesday morning.
Cardona has not indicated the specific policies the Biden administration intends to propose or change.
Democratic lawmakers and advocates for sexual assault victims had already started pressuring the Biden administration to quickly act on changing the Title IX rules. Some welcomed Tuesday's announcement.
"This is a critical next step in protecting survivors in school and ensuring Title IX's promise of ending sex discrimination is realized," said Fatima Goss Graves, president of the National Women's Law Center, a nonprofit advocacy group. "So I'd see this step as a victory and a testament to the student survivors who have continued to so bravely fight for campuses where they can be safe and treated fairly and with dignity."
Federal rule-making can be a lengthy process — sometimes taking over a year — but it is more lasting than executive orders or policy statements and more difficult for future administrations to reverse. Under DeVos, the Education Department used the same rule-making process to set up the current Title IX regulation on campus sexual misconduct.
The framework implemented by DeVos prevents schools from launching Title IX investigations into allegations of assaults that take place off campus, uses a narrower definition of sexual harassment compared to workplace standards and requires schools to presume that accused students are innocent at the outset of investigations.
DeVos' rules were widely condemned by victims' rights advocates, who said some elements, such as requiring colleges to allow accused students to cross-examine their accusers through third parties, would discourage people from reporting assaults. Many trade groups for K-12 schools and universities were also critical, arguing that the rules would turn their institutions into courtrooms.
Advocates for accused students praised DeVos' policies as ensuring evenhanded responses to assault allegations on campuses. The Foundation for Individual Rights in Education, a nonprofit that focuses on due process on college campuses, said last month that it would not rule out suing to block a Biden administration rewrite of Title IX rules.
A facial recognition system for law enforcement at a technology conference in Washington. (photo: Saul Loeb/AFP)
Surveillance Nation: Employees at Law Enforcement Agencies Across the US Ran Thousands of Clearview AI Facial Recognition Searches
Ryan Mac, Caroline Haskins, Brianna Sacks and Logan McDonald, BuzzFeed
A controversial facial recognition tool designed for policing has been quietly deployed across the country with little to no public oversight. According to reporting and data reviewed by BuzzFeed News, more than 7,000 individuals from nearly 2,000 public agencies nationwide have used Clearview AI to search through millions of Americans’ faces, looking for people, including Black Lives Matter protesters, Capitol insurrectionists, petty criminals, and their own friends and family members.
BuzzFeed News has developed a searchable table of 1,803 publicly funded agencies whose employees are listed in the data as having used or tested the controversial policing tool before February 2020. These include local and state police, US Immigration and Customs Enforcement, the Air Force, state healthcare organizations, offices of state attorneys general, and even public schools.
In many cases, leaders at these agencies were unaware that employees were using the tool; five said they would pause or ban its use in response to questions about it.
Our reporting is based on data that describes facial recognition searches conducted on Clearview AI between 2018 and February 2020, as well as tens of thousands of pages of public records, and outreach to every one of the hundreds of taxpayer-funded agencies included in the dataset.
The data, provided by a source who declined to be named for fear of retribution, has limitations. When asked about it in March of this year, Clearview AI did not confirm or dispute its authenticity. Some 335 public entities in the dataset confirmed to BuzzFeed News that their employees had tested or worked with the software, while 210 organizations denied any use. Most entities — 1,161 — did not respond to questions about whether they had used it.
Still, the data indicates that Clearview has broadly distributed its facial recognition software to federal agencies and police departments nationwide, offering the app to thousands of police officers and government employees, who at times used it without training or oversight. Often, agencies that acknowledged their employees had used the software confirmed it happened without the knowledge of their superiors, let alone the public they serve.
Such widespread use of Clearview means that facial recognition may have been used in your hometown with very few people knowing about it.
In a statement to BuzzFeed News, Hoan Ton-That, the company’s cofounder and CEO, said it was “gratifying to see how quickly Clearview AI has been embraced by US law enforcement.” He declined to answer more than 50 detailed questions about the company's practices and relationships with law enforcement agencies.
“Americans shouldn’t have to rely on BuzzFeed to learn their local law enforcement agencies were using flawed facial recognition technology,” Sen. Ron Wyden, an Oregon Democrat, told BuzzFeed News. “This report pulls back the curtain on Clearview’s shady campaign to encourage the secret adoption of its service by local police. Inaccurate facial recognition will inevitably cause innocent people to be wrongly accused and convicted of crimes and could very well lead to tragedies.”
For years, law enforcement agencies have experimented with facial recognition, a technology that promises to help identify people of interest by matching surveillance photos to known images — such as a headshot from a driver’s license or passport. But there are several barriers to its adoption, including high costs, unreliable results, and public opposition.
Clearview has pushed its technology into the mainstream with a product it claims is both more accurate and cost-effective than those of its competitors. For a time, it made the tool accessible via a free trial to almost any law enforcement officer who wanted to sign up.
“I found this site on a law enforcement web site last year, I set up an account to see if it worked,” Adrian Williams, police chief in Wilson’s Mills, North Carolina, wrote to BuzzFeed News in December 2020.
If you have information about Clearview AI, or other facial recognition technology used by law enforcement, please email us at tips@buzzfeed.com. Or, to reach us securely, see this page.
Williams said he had tested Clearview AI using his own personal photos, but the software returned no matches. “I ran two known persons to see if they came back with any useful info. I didn’t think it worked the way the ad said it would,” he said.
The New York City–based startup claims to have amassed one of the largest-known repositories of pictures of people’s faces — a database of more than 3 billion images scraped without permission from places such as Facebook, Instagram, and LinkedIn. If you’ve posted images online, your social media profile picture, vacation snapshots, or family photos may well be part of a facial recognition dragnet that’s been tested or used by law enforcement agencies across the country.
Data analyzed by BuzzFeed News indicated that individuals at some 1,803 agencies — which were all contacted and asked whether they had ever used Clearview — ran almost 340,000 searches. Based on conversations with people who have used the software, a Clearview search may include a demonstration scan at a trade show, a police officer looking up a colleague as a test, or an actual investigative attempt to find a person of interest. While this database represents only a snapshot of Clearview’s reach as of February 2020, it provides unprecedented insight into how facial recognition technology can spread through law enforcement agencies across the country.
Smaller police departments were among Clearview’s earliest users. Officials in Mountain Brook, Alabama, which has a population of about 20,000, tested the product and ran nearly 590 searches. In Illinois, the secretary of state’s office ran nearly 8,900 searches, telling BuzzFeed News that the software was used “to assist other law enforcement agencies in their investigations.”
A police department crime analyst in Leawood, Kansas, tried Clearview thanks to a recommendation from a peer, only to discover another detective had already been using the software, a spokesperson said. The two officers ran 50 searches before the department decided it didn’t want to buy the product. The Wyoming Highway Patrol gave Clearview a go for about three weeks last February, Sgt. Jeremy Beck said.
Even representatives at Washington’s Department of Fish and Wildlife and Minnesota’s Commerce Fraud Bureau confirmed to BuzzFeed News that individuals within their organizations tested the software. A spokesperson for Iowa’s public defender’s office said people used it “to better understand how law enforcement might use it.”
Following an inquiry from BuzzFeed News, the Washington National Guard’s Counterdrug Program found that an instructor at its Western Regional Counterdrug Training Center had not only registered to use Clearview’s facial recognition software without permission but had also incorporated it into the curriculum for an officer training course. This course was open to “federal, state, local and tribal law enforcement,” per emails obtained by BuzzFeed News through a public records request.
“Any mention of this software has now been stripped from our training materials,” said Karina Shagren, communications director for Washington’s military department. “And we’re now conducting a top-down review of all training programs.”
Clearview’s promotional materials and communications with law enforcement agencies were obtained via more than 140 public records requests from BuzzFeed News. The documents detail the startup’s flood-the-market strategy, in which it hawked free trials of its technology to seemingly anyone with an email address associated with the government or a law enforcement agency and told them to “run wild.” The data shows that this marketing push put an unproven and federally unregulated facial recognition tool into the hands of people associated with taxpayer-funded agencies in the District of Columbia, the US Virgin Islands, and every state except Vermont.
After March 2020, according to emails obtained via a public records request, Clearview placed a few checks on its free trial program, including the requirement of a superior's approval and appointment of an administrator to monitor use. Clearview’s website was also updated last month to state that officers have to provide a case number before conducting a search.
But between summer 2019 and February 2020, none of these checks existed. Any officer could sign up, and Clearview explicitly encouraged them to test its software on friends and family members.
“No strings attached,” a November 2019 email to police in Lakeland, Florida, reads. “It only takes one minute to install and you can start searching immediately.”
In a statement to BuzzFeed News, Ton-That said that in two years the company had helped “thousands of agencies” solve crimes including “child exploitation, financial fraud, and murder,” but did not provide specific examples when asked.
“As a young startup, we’re proud of our record of accomplishment and will continue to refine our technology, cybersecurity, and compliance protocols,” he said. “We also look forward to working with policymakers on best practices to forge a proper balance between privacy and security that serves the interests of families and communities across America.”
Created by Ton-That, an Australian-born college dropout who traveled in far-right circles, Clearview AI debuted in 2017 as SmartCheckr, a tool for tracking people across disparate social media platforms. With funding from Facebook board member Peter Thiel, the company changed its name a year later when it began focusing on facial recognition.
Clearview has touted its software as the “world’s best facial-recognition technology,” but its most novel innovation is doing what no other company has been willing to do: rip billions of personal photos from social media and the web without permission or consent.
In his statement, Ton-That compared Clearview to a search engine and said the facial recognition tool only searches public information available on the internet. “The work we do is fully protected by the First Amendment and complies with all applicable laws,” he said.
Critics, including the American Civil Liberties Union, disagree.
“Protecting privacy means maintaining control of private information that is most revealing of our identities, our activities, our selfhood, and Clearview’s business is based on taking that control away from people without their consent,” Nathan Freed Wessler, a senior staff attorney with the ACLU, told BuzzFeed News. “You can change your Social Security number if it is taken without your consent, but you can’t change your face.”
Last May, the ACLU sued Clearview for allegedly violating an Illinois law overseeing the collection of biometric data by private companies. The company is also facing multidistrict litigation for the sale of its technology in Illinois — an alleged violation of a state biometric privacy law — a suit from the Vermont attorney general, and a suit from Latinx and Chicanx rights group Mijente. Facebook, LinkedIn, Google, and Twitter have all sent Clearview cease-and-desist letters alleging that the company violated its terms of service by scraping people’s data. (All four companies declined to say if they have plans to take further legal action against Clearview.)
None of this has hampered Clearview’s aggressive marketing efforts toward law enforcement. The company regularly promoted itself to email lists of officers with claims that its software can scan “over 1 billion faces in less than a second” and that it is “100% accurate across all demographic groups.”
This strategy has worked brilliantly. By February 2020, almost 2,000 taxpayer-funded entities and police departments across the US had at least one person run at least one search with Clearview, according to data reviewed by BuzzFeed News. The company assured those users in marketing emails that the more searches they ran, the more likely they were to match a suspect.
Wessler condemned the company’s marketing.
“Their claims of near-perfect identification have never been substantiated, and the pervasive tracking of our faces and whereabouts with a flawed technology is just too dangerous to have in the hands of the government,” he said.
In recent years, Clearview reached hundreds of law enforcement agencies by using a sales strategy commonplace among software companies like Slack or Dropbox. Rather than rely on standard procurement channels to make a sale, it also targeted individual employees with free trials. That allowed Clearview to create internal demand in a bottom-up manner with the hope that users would advocate within their departments to move to a paid version.
These free trials have helped Clearview create a broad swath of connections with local police officers; Ton-That recently claimed that 3,100 law enforcement agencies have used the software as of March 2021. Listed in the data BuzzFeed reviewed, for example, were more than 40 individuals in the New York Police Department who had collectively run over 11,000 searches — the most of any entity as of February 2020. The NYPD announced new facial recognition policies in March 2020 following a BuzzFeed News story that detailed how it used the software.
The New York State Police ran more than 5,100 searches and used the software “to generate potential leads in criminal investigations as well as homeland security cases involving a clearly identified public safety issue,” according to Beau Duffy, director of public information. The data lists other police departments that have run more than 4,000 searches as of February 2020: Bergen County Sheriff’s Office in New Jersey (more than 7,800), Indiana State Police (more than 5,700), Broward County Sheriff’s Office in Florida (more than 6,300), and Jefferson Parish Sheriff’s Office in Louisiana (nearly 4,200). Indiana State Police confirmed its use of Clearview. The Broward County and Jefferson Parish offices did not respond to multiple requests for comment. A Bergen County Sheriff spokesperson denied the agency used Clearview.
According to data reviewed by BuzzFeed News, individuals at 15 different state attorneys general offices tried the software, including those in Texas, Alabama, and New Jersey, which banned its own law enforcement agencies from using it in January 2020. The Texas attorney general’s office did not respond to multiple requests for comment, while a spokesperson for the Alabama attorney general’s office said it “had no contracts with Clearview AI.”
When BuzzFeed News notified the California attorney general’s office of accounts tied to its employees, a spokesperson said the state’s Department of Justice had “not authorized the use of facial recognition software.” The department subsequently blocked access to Clearview’s website on employees’ devices, the spokesperson added, “out of an abundance of caution.”
Individuals associated with public schools have also apparently tried the company’s facial recognition software. As of early 2020, data reviewed by BuzzFeed News lists 31 public community colleges and universities — including the University of Alabama and West Virginia University, neither of which responded to multiple requests for comment. A spokesperson for California State University, Long Beach, confirmed a detective had reviewed the platform’s capabilities on a 30-day trial.
Records seen by BuzzFeed News show that individuals associated with two high schools — Somerset Berkley Regional High School in Massachusetts and Central Montco Technical High School in Pennsylvania — and Texas’s Katy Independent School District appear to have run searches.
Officials at Central Montco Technical High School did not respond to requests for comment. A spokesperson for the Somerset Police Department, which is tied to Somerset Berkley Regional High, according to data seen by BuzzFeed News, confirmed that a detective had used Clearview on a 30-day trial. A spokesperson for Katy Independent School District’s police department said that the agency does not use facial recognition software and did not answer follow-up questions.
In marketing statements, Clearview claims its software has been used to help identify child predators. And to some extent these claims were borne out in BuzzFeed News’ reporting.
ICE — as well as police in Mount Pleasant, Wisconsin, and Raleigh, North Carolina — told BuzzFeed News the software had been used in such cases. “Clearview AI is most often used to investigate formidable crimes that are extraordinary in nature, such as reports of human trafficking and shootings,” said Laura Hourigan, a spokesperson for the Raleigh Police Department. “These searches are fairly narrow in their scope, are limited, and are focused specifically on what they are looking for at that time.”
Ton-That told BuzzFeed News that law enforcement agencies have also used Clearview to identify insurrectionists who stormed the US Capitol on Jan. 6, though he declined to provide specifics. Data seen by BuzzFeed News shows that the US Capitol Police used Clearview to run more than 60 searches as of early 2020.
It’s unclear if the Capitol Police, the main law enforcement agency tasked with protecting Congress, still has access to the facial recognition tool. A spokesperson did not respond to multiple requests for comment.
Clearview’s free trials, most of which give users unlimited searches for 30 days, may have also helped put its software into the hands of employees at many of the nation’s largest law enforcement agencies. Individuals associated with the departments of Justice, Defense, and State have all apparently tried the facial recognition software, according to data reviewed by BuzzFeed News. Spokespeople for those departments declined or did not respond to requests for comment.
The data lists people at the five largest branches of the US military — the Army, Air Force, Navy, Coast Guard, and Marines — as having used the company’s software. The US Army Criminal Investigation Command, which pursues violations of military law, had run more than 1,300 searches as of February 2020, according to the data. In December 2019, the Air Force signed a $50,000 exploratory contract with Clearview “to determine if there was an operational need,” according to a branch spokesperson. The Army, Navy, and Air Force did not respond to multiple requests for comment. A spokesperson for the Marine Corps said there was no indication that anybody at the service branch had used Clearview. A spokesperson said the Coast Guard “does not use Clearview AI.”
An FBI spokesperson declined to comment on the agency’s investigative tools and techniques, though records seen by BuzzFeed News list individuals at more than 20 bureau offices as having run over 5,800 Clearview searches as of early 2020. Those same records show that employees at US Customs and Border Protection registered more than 270 accounts and ran nearly 7,500 searches, the most of any federal agency that did not have a contract with Clearview at the time.
Asked how the agency uses the software in its policing work, a CBP spokesperson told BuzzFeed News it “does not use Clearview AI for its biometric entry-exit mission.” The spokesperson did not answer further questions.
Since BuzzFeed News first reported last year on ICE’s more than 8,000 searches through Clearview, the agency has expanded its relationship with the facial recognition provider. ICE has said publicly that it “does not routinely use facial recognition technology for civil immigration enforcement,” but it signed a $224,000 contract with Clearview last August.
Clearview’s strategy of handing out free trials meant that its facial recognition software was often used without any oversight. Officials at 34 public entities told BuzzFeed News that they had found police officers or other public servants using Clearview without approval.
Even more concerning, representatives at 69 law enforcement and taxpayer-funded entities initially denied to BuzzFeed News that their employees had used the software — but after further examination, discovered that they had.
Police officials in Chula Vista, California, for example, were adamant that their department did not use any facial recognition technology in its work. “Our officers can’t sign up for something like that on their own,” Eric Thunberg, a captain with the organization’s investigations division, told BuzzFeed News in November.
But after a more thorough search, Thunberg determined that they had. A “small number” of officers signed up for a free trial in 2019 and used the software to investigate threats — like “a photo of a kid holding a gun or weapon” — against dozens of schools in their jurisdiction, he said. “Absent of your inquiry, we never would have known about it.”
Similarly, the Tacoma Police Department in Washington initially denied using Clearview before discovering that an officer “in an investigative capacity” ran nearly 800 searches during a free trial that lasted from November 2019 to November 2020. Spokesperson Wendy Haddow noted that “the officer said there were no arrests made that he is aware of from the searches.”
The Los Angeles Police Department, the nation’s third-largest police agency, banned the use of commercial facial recognition in November following an inquiry from BuzzFeed News about the nearly 475 Clearview searches that officers ran as of early last year. John McMahon, the deputy police chief who oversees the department’s IT division, confirmed that the more than 25 officers and investigators who used the software did so in ways that were “not authorized.” The department declined to answer further questions.
In Alameda, California, BuzzFeed News found that police officers continued to use Clearview after elected leaders voted to ban the technology in December 2019. In the months before that vote, an Alameda police officer warned Clearview cofounder Richard Schwartz that the department was facing “an uphill battle” to approve a paid contract for the software. Schwartz’s reply, in an August 2019 email obtained via a public records request, denounced the “anti-facial-recognition narrative” and touted Clearview as a “state-of-the-art investigative tool for law enforcement that is super-accurate and 100% unbiased.”
He added, “Are they really going to let politics and deliberately misleading reports prevent you from using a life-saving tool like Clearview?” (The company declined to answer questions about Schwartz’s communications with Alameda police officers.)
When Alameda became the fourth city in California to ban the use of facial recognition in December 2019, some officers apparently did not heed the directive. Records seen by BuzzFeed News show that Alameda police officers — who ran nearly 550 searches, according to the data — continued to use Clearview at least until February 2020, unbeknownst to city officials. The city manager and city council members, who told BuzzFeed News that the officers’ use of the software has been hard to track because of its free trials, are now investigating the matter.
“Never in my job would I ever think, Oh, I wonder if I can use this and not check it with a higher authority,” John Knox White, a member of the Alameda City Council, told BuzzFeed News. “If something is controversial, we should check in with the city attorney's office; key decision-makers should be involved to make sure there's no problem.”
He added, “That we have emails showing police used this technology after an actual vote saying you can’t do this is extremely troubling.”
Ton-That is an outspoken advocate for Clearview, parroting the company's marketing claims in print and TV interviews. In February 2020, he said on CNN that Clearview was “99% accurate” and compared its achievements to the breaking of the sound barrier. “It’s gotten to the point where it works better than the human eye,” he said of the facial recognition software.
But these claims have not been vetted by an independent third party, and Clearview offers no research to support them. The company’s marketing materials claim that its recognition software is 98% accurate when measured against the University of Washington’s MegaFace benchmark of 1 million faces; the school has never verified that claim. Ton-That declined to provide evidence that the technology had been reviewed by a third party but said that the company “plans to participate” in tests “to further validate the accuracy and reliability of Clearview AI.”
The University of Washington did not respond to requests for comment. Meanwhile, representatives from multiple law enforcement agencies told BuzzFeed News that they had opted not to purchase Clearview subscriptions because the software either did not work as well as they’d expected or did not meet department standards.
Dennis Natale, a detective with the Melrose Park Police Department in Illinois, which ran more than 120 searches as of early 2020, said that Clearview “didn’t directly affect” any of his cases. Lt. Jon Bowers of the Fort Wayne Police Department in Indiana told BuzzFeed News he didn’t recall any instance in which Clearview “was a lynchpin” in closing a case; he noted that his department pays for the software and uses it infrequently because it rarely returns matches on searches.
Shawn Vaughn, public information officer for the police department in Texarkana, Texas, which had run more than 280 Clearview searches as of February 2020, said the agency stopped using the software after a few months because of concerns “about its reliability and privacy issues.”
“As far as any claims of 100% accuracy by a product, it's been my experience that those should immediately be viewed as suspect,” he said.
Part of the problem, officers explained, is that Clearview doesn’t seem to work well with the low-resolution photos and grainy surveillance footage that are so common in police work. Bowers said that Fort Wayne police officers used Clearview to identify alleged rioters during a Black Lives Matter protest in late May 2020, but only because the event was widely documented in pictures and video — something that is not true of the typical crime scene.
“The quality is often not good enough to put it in Clearview and find the [suspect in the] armed robbery from Friday night,” he said, noting that the department was reconsidering its $4,000 yearlong contract with the company.
Historically, concerns about the accuracy of facial recognition systems have been particularly acute when it comes to scrutinizing the faces of nonwhite people. Technology developed by other companies has been plagued by claims of racial bias and false identifications, leading to innocent people being accused of crimes. In 2018, the ACLU reported that Amazon’s facial recognition system incorrectly matched 28 members of Congress to people who had been arrested for crimes. There are now at least three known instances of people being jailed after being falsely identified by other facial recognition tools. All three were Black men.
Last June, in light of racial justice protests and further scrutiny on potential bias in facial recognition, Microsoft placed a temporary moratorium on the sale of its technology to police departments “until there is a strong national law grounded in human rights.” Amazon did as well for a year, saying it wanted to “give Congress enough time to implement appropriate rules.”
“The technology is not built to identify Black people,” Mutale Nkonde, CEO of the nonprofit communications group AI for the People, told BuzzFeed News. She noted that when facial recognition tools misidentify people of color, it amplifies dangers they already face from implicit racial bias.
Ton-That has repeatedly said that Clearview is immune to such errors. In a statement to BuzzFeed News, he reiterated this claim and said that “As a person of mixed race, ensuring that Clearview AI is non-biased is of great importance to me.”
He added, “Based on independent testing and the fact that there have been no reported wrongful arrests related to the use of Clearview AI, we are meeting that standard.”
However, Clearview did not provide any information about that testing, despite repeated inquiries. Furthermore, while BuzzFeed News found no evidence of wrongful arrests, we learned of false positives, suggesting the software is susceptible to similar problems.
Detective Adam Stasinopoulos of the Forest Park Police Department in Illinois, which stopped using Clearview after its free trial expired, said he saw false positives in search results within the facial recognition app. “I know that there were matches that weren’t exactly accurate,” Stasinopoulos said. BuzzFeed News reviewed documents in a Forest Park arrest in which Clearview correctly identified a suspect, a Black woman, in the theft of an engagement ring, yet it also matched her image with three other Black individuals who were not involved in the crime.
In early 2020, a source with access to the Clearview AI mobile app conducted a series of searches on behalf of BuzzFeed News. That person, who asked not to be identified for fear of retribution, ran more than 30 searches using a group of images including several photos of computer-generated faces.
On two searches for computer-generated faces — a woman of color and a girl of color — Clearview returned images of real people as matches. For example, it matched an artificial face to an image of a real girl of color in Brazil, whose school had posted her picture to Instagram. For searches of computer-generated white faces, no false matches occurred.
Clearview declined to respond to questions about these issues.
In Clearview contracts seen by BuzzFeed News, the company warns paying agencies that matches should not be the sole basis for making an arrest. But experts worry that the company’s unproven claims of accuracy could encourage investigators to use a possible match and develop cases they wouldn’t otherwise pursue. Facial recognition software is so new that lawmakers and the public are only beginning to grapple with how it is used.
Sen. Chris Coons, a Delaware Democrat and the chair of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, told BuzzFeed News that as facial recognition becomes more widespread, local and federal authorities need to enforce restrictions.
“As this reporting shows, we have little understanding of who is using this technology or when it is being deployed,” he said. “I’m also deeply concerned by evidence that facial recognition technologies can be error-prone or demonstrate harmful biases, especially when used on people of color. Without transparency and appropriate guardrails, this technology can pose a real threat to civil rights and civil liberties.”
Police can easily abuse and weaponize facial recognition tools like Clearview’s, Joy Buolamwini, founder of the Algorithmic Justice League, told BuzzFeed News, regardless of whether the tool is accurate.
“We have to take the conversation beyond questions of accuracy or algorithmic bias and start focusing on questions of algorithmic harms and civil rights,” she said. “When these systems are used as tools of mass surveillance, the excoded, those who are harmed by algorithmic systems, suffer the brunt of overpolicing, undue scrutiny, and false arrests.”
And then there is the issue of transparency. Michael Gottlieb, a lawyer for a racial justice protester who was arrested in Miami in May, learned Clearview had been used to identify his client as a suspect only after a local NBC affiliate station reported it. The Miami Police Department did not return multiple requests for comment.
Similarly, for those who were arrested during Black Lives Matter protests in Fort Wayne, Indiana, the police department there declined to disclose who had been identified with the assistance of Clearview.
Emails between police officers from late 2019 obtained through a public records request show some discussing whether to omit mention of Clearview AI in official reports.
“When you write reports about identifying suspects through Clearview AI,” one officer from Springfield, Illinois, asked a peer in Gainesville, Florida, in November 2019, “do you name the program, etc.? Or is it better to keep it vague. I wasn’t sure if there were any issues with trying to keep the information away from suspects about how we got their information.”
The Gainesville officer replied: “You can keep it general and say ‘through the use of investigative techniques.’ It’s not going to be totally wrong if you mention it.”
The rapid development and proliferation of commercial facial recognition has left lawmakers across the country scrambling to define digital privacy and protect our faceprints. It’s also made some police departments uneasy over deploying such technology in their communities.
Sheriff Rob Streck in Montgomery County, Ohio, didn’t mind that two of his detectives had tested Clearview on a trial basis, but he opted not to adopt the tool officially.
“I think there are still many unanswered questions about privacy and the technology,” he said.
Law enforcement leaders in other states echoed Streck’s concerns. The Utah Department of Public Safety considered using Clearview AI but decided against it because it couldn’t verify how, and from where, the company was mining its faceprints, Chief Brian Redd told BuzzFeed News.
“If a company is not being transparent or not wanting to give us information as to how they are mining their data, that’s a red flag for us,” he said. “Our community also had concerns about the use of their public photos, so we shut it down.”
After testing Clearview, the Utah Department of Public Safety continued to work with state leaders on a facial recognition bill, which the state legislature passed last month. It’s not as strong as some other states’, but it does require officers to submit a written request before running a search, and only allows them to do so in a felony case or a life-or-death situation. The bill also bans state agencies from accepting free trials from private companies like Clearview.
“This is a powerful tool, and it needs accountability,” Redd said. “When it comes down to it, we didn’t want to be the only ones who have the power to decide how to use it.”
Jameson Spivack, an associate at Georgetown Law’s Center on Privacy & Technology, said that while such guardrail policies are a good first step, they’re not worth much if they are not enforced properly.
“If you were to ask law enforcement how they use face recognition, they could just say, ‘Oh, it's just an investigative lead,’ which may or may not be true,” Spivack said. “The fact of the matter is, in most cases, there's nothing holding them to this. There's, in most cases, no laws, in most cases, no internal policies. They pretty much have free rein to use it how they want.”
Russian opposition leader Alexei Navalny. (photo: AFP)
Alexei Navalny 'Seriously Ill' on Prison Sick Ward, Says Lawyer
Andrew Roth, Guardian UK
Roth writes: "Russian opposition figure has fever and a cough and has lost weight, according to a member of his legal team who visited him."
Alexei Navalny’s lawyer has confirmed that the opposition leader is “seriously ill” after reports emerged that he had been transferred to a prison sick ward for a respiratory illness and had been tested for coronavirus.
The Kremlin critic said in a note published on Monday that he was coughing and had a temperature of 38.1C (100.6F). Several prisoners from his ward had already been treated in hospital for tuberculosis, Navalny wrote. Hours later, the pro-Kremlin newspaper Izvestia reported he had been moved to a sick ward and tested for coronavirus, among other diseases.
On Tuesday, Russian police arrested several Navalny supporters who had travelled to the prison 60 miles east of Moscow to petition for him to receive proper medical care. Anastasia Vasilyeva, the head of the Russian Doctors’ Alliance, was arrested along with three other members of the renegade medical union. Reporters for CNN and for Belsat, a Russian-language television channel based in Poland, were also briefly detained.
“We are coming here today to offer help,” Vasilyeva told journalists before her arrest. “There’s no war here. Let’s settle this problem like people.”
A lawyer for Navalny said that a member of his legal team had seen the opposition leader on Tuesday and that he was “in rather bad condition”. Navalny declared a hunger strike last week because he had been denied a visit from a personal doctor for growing numbness and pain in his back and legs that had made it difficult for him to walk.
“He has lost a lot of weight, plus he has a strong cough and a temperature of 38.1C,” Olga Mikhailova, the lawyer, said on the Echo of Moscow radio station. “This man is seriously ill. It’s a complete outrage that the IK-2 [prison] has driven him to this condition.”
In a letter published on Monday, Navalny wrote that three inmates in his ward had been taken to hospital recently with tuberculosis. He joked darkly that if he had contracted the disease, it could distract him from “the pain in my back and numbness in my legs”.
There has not been official confirmation of Navalny’s medical treatment, although a lawyer speculated on Monday that the sick ward was probably in the IK-2 prison colony, 60 miles east of Moscow, where he is being held. The prison is notoriously strict and said to specialise in isolating prisoners from the outside world.
Navalny’s wife, Yulia, on Tuesday published a letter sent to her from the prison warden who said that he could not send Navalny to hospital because he did not have his passport. In a statement posted online, she also claimed that the warden had taunted her husband by grilling a chicken and handing out sweets to his fellow inmates while the opposition leader has maintained his hunger strike.
Navalny is serving a two-and-a-half-year prison term on embezzlement charges that he has said are retribution for his political opposition to Vladimir Putin. Navalny survived a poisoning attempt that he traced back to Russia’s FSB last year. He was arrested in January when he returned to Russia from Germany, where he had been treated for poisoning with a novichok-type nerve agent.
Navalny has compared the prison colony to a “concentration camp” and complained of sleep deprivation and other psychological pressure. Last week, a pro-Kremlin activist who had been jailed on spying charges in the US visited him in the prison, telling him that he had exaggerated the poor conditions there.
“I’m tired of the complaining. He is in one of the best penal colonies in Russia,” Maria Butina, who now works for the state-funded television station RT, posted on social media. She visited the prison with a camera crew in tow.
Navalny complained about the visit in a note posted to his Telegram channel: “Instead of a doctor, today the miserable RT television propagandist [Maria] Butina came along with video cameras,” he said.
Amnesty International’s secretary general, Agnès Callamard, said on Monday that she had written to Vladimir Putin about the “arbitrary arrest and deteriorating health condition” of Navalny.
Later on Tuesday, Navalny said he had been visited by doctors representing the Vladimir region who said they would not allow him to meet with someone sent from Moscow, a decision that he said violated the law.
An obsession with catastrophe has driven much of the research into how societies responded to a shifting climate throughout history. (photo: Shana Navak/Getty)
Kate Yoder | Did Climate Change Cause Societies to Collapse? New Research Upends the Old Story.
Kate Yoder, Grist
Yoder writes: "If you're under the impression that climate change drove ancient civilizations to their demise, you probably haven't heard the full story."
The untold history of how people survived the past 2,000 years.
The ancient Maya, for example, didn’t vanish when their civilization “collapsed” around the 9th century. Though droughts certainly caused hardship, and cities were abandoned, more than 7 million Maya still live throughout Mexico and Central America. The Maya dealt with dry conditions by developing elaborate irrigation systems, capturing rainwater, and moving to wetter areas — strategies that helped communities survive waves of drought.
A report recently published in the journal Nature argues that an obsession with catastrophe has driven much of the research into how societies responded to a shifting climate throughout history. That has resulted in a skewed view of the past that feeds a pessimistic view about our ability to respond to the crisis we face today.
“It would be rare that a society as a whole just kind of collapsed in the face of climate change,” said Dagomar Degroot, an environmental historian at Georgetown University and the lead author of the paper. The typical stories of environmentally driven collapse that you might have heard about Easter Island or the Mayan civilization? “All those stories need to be retold, absolutely,” he said.
Painting a more complex picture of the past — one that includes stories of resilience in the face of abrupt shifts in the climate — might avoid the fatalism and despair that sets in when many people grasp the scale of the climate crisis. Degroot himself has noticed that his students were beginning to echo so-called “doomist” talking points: “Past societies have crumbled with just a little climate change, Doomists conclude — why will we be any different?” Part of the reason people study the past, Degroot said, “is because we care about the future, and about the present, for that matter.”
Of course, the idea that a changing climate can drive collapse isn’t wrong. It’s just not the whole story. “Certainly our article did not disprove that climate changes have had disastrous impacts on past societies — let alone that global warming has had, and will have, calamitous consequences for us,” Degroot wrote in a post. Even modest changes in the climate have caused problems. And today’s planetary changes are anything but modest: The world is on track to see an alarming 3.2 degrees C (5.8 degrees F) warming by the end of this century, even if countries meet their current commitments to cut greenhouse gas emissions under the Paris Agreement.
The new paper looked at ways that societies adapted to a shifting climate over the last 2,000 years. Europe and North America endured periods of moderate cooling: the Late Antique Little Ice Age around the 6th century, and the Little Ice Age from the 13th to 19th centuries. Looking at case studies from these frigid eras, the researchers concluded that many societies responded with flexibility and ingenuity. They detail examples of people moving into different regions, developing trade networks, cooperating with others, altering their diets, and finding new opportunities.
When volcanic eruptions fueled the Late Antique Little Ice Age, for example, the Romans took advantage of a rainier Mediterranean. Settlements and market opportunities expanded as people began growing more grains and keeping more grazing animals. They built dams, channels, and pools to help farmers in more arid areas manage water, and, according to the paper, “the benefits were widespread.”
During the Little Ice Age in the 17th century, the whaling industry in Norway’s northern islands in the Arctic Ocean actually functioned more effectively during colder years. According to Degroot’s research, whalers coordinated with each other and concentrated their efforts on a limited number of days in spots where whales could be easily caught.
In what is now southeastern California, which vacillated between periods of severe drought and increased rain toward the end of the 15th century, Mojave settlements dealt with the unsteady climate by turning to regional trade. They developed new ceramic and basket-weaving techniques, trading for maize, beans, and squash produced by their southern Kwatsáan neighbors.
If stories of adaptation are so common, why aren’t they told more often? Maybe that’s because people are more interested in understanding catastrophes and why they happened, rather than ones that … didn’t. “You can imagine if you do that over and over again, then the entire field is going to focus on disaster,” Degroot said. “And that’s exactly what has happened, I think.”
In the study, an international team of archaeologists, historians, paleoclimatologists, and other experts reviewed 168 studies published on the Little Ice Age in Europe over the past 20 years. While 77 percent of the studies emphasized catastrophe, only 10 percent focused on resilience. In this context, “resilience” refers to the ability of a group to cope with hazards, responding and reorganizing without losing their core identity.
Stories of collapse are often told as parables of what happens when humans wreck things (think Noah’s Ark). The public’s interest in environment-driven collapse picked up in 2005 with the publication of Jared Diamond’s book Collapse: How Societies Choose to Fail or Succeed. Some took issue with the interpretations in the book. Take Easter Island, or Rapa Nui, the South Pacific island settled by Polynesians known for its monoliths of heads (actually, the rest of their bodies are underground). The book popularized the idea that the population crashed because the islanders slashed and burned all the trees — a cautionary tale on the perils of destroying the environment.
The new story about Rapa Nui is more complicated. In the article “The truth about Easter Island: a sustainable society has been falsely blamed for its own demise,” the archaeologist Catrine Jarman attributed deforestation to the tree-munching rats the Polynesians brought with them, and blames the population crash in the 19th century on slave raids and diseases introduced by European traders.
Recent research suggests that indigenous groups have been particularly good at adapting to climate changes, Degroot said, “either because they were able to migrate or because they were able to alter the distribution of resources that they relied upon.”
Even though many societies survived the pressures of the mini ice ages, Degroot found that resilience sometimes “is a product of one community having access to favorite resources, maybe over another.” The wealthy 17th-century Dutch, for example, imported grains from around the Baltic and then sold them for “lucrative profits” wherever the weather caused grain shortages in Europe. The lesson for today, Degroot said, is that “we need to think about building equality as a way of adapting to climate change.”
The report lays out best practices for researchers to follow when they study the history of climate and society, outlining ways to reduce biases and avoid the misuse of historical data. Following a more rigorous process may well end up unearthing more examples of people facing searing heat and dried-up wells, and still finding ways to survive. “We hope that this discourages the kind of doomist idea that the past tells us that we’re screwed,” Degroot said. “We might be! But the past does not tell us that.”