Wednesday, May 3, 2023

Ross Andersen | Never Give Artificial Intelligence the Nuclear Codes

 

 

 

Nuclear brinkmanship. (photo: Victor Tangermann/Futurism)
Ross Andersen | Never Give Artificial Intelligence the Nuclear Codes
Ross Andersen, The Atlantic
Andersen writes: "The temptation to automate command and control will be great. The danger is greater."   


The temptation to automate command and control will be great. The danger is greater.


No technology since the atomic bomb has inspired the apocalyptic imagination like artificial intelligence. Ever since ChatGPT began exhibiting glints of logical reasoning in November, the internet has been awash in doomsday scenarios. Many are self-consciously fanciful—they’re meant to jar us into envisioning how badly things could go wrong if an emerging intelligence comes to understand the world, and its own goals, even a little differently from how its human creators do. One scenario, however, requires less imagination, because the first steps toward it are arguably already being taken—the gradual integration of AI into the most destructive technologies we possess today.

The world’s major military powers have begun a race to wire AI into warfare. For the moment, that mostly means giving algorithms control over individual weapons or drone swarms. No one is inviting AI to formulate grand strategy, or join a meeting of the Joint Chiefs of Staff. But the same seductive logic that accelerated the nuclear arms race could, over a period of years, propel AI up the chain of command. How fast depends, in part, on how fast the technology advances, and it appears to be advancing quickly. How far depends on our foresight as humans, and on our ability to act with collective restraint.

Jacquelyn Schneider, the director of the Wargaming and Crisis Simulation Initiative at Stanford’s Hoover Institution, recently told me about a game she devised in 2018. It models a fast-unfolding nuclear conflict and has been played 115 times by the kinds of people whose responses are of supreme interest: former heads of state, foreign ministers, senior NATO officers. Because nuclear brinkmanship has thankfully been historically rare, Schneider’s game gives us one of the clearest glimpses into the decisions that people might make in situations with the highest imaginable human stakes.

It goes something like this: The U.S. president and his Cabinet have just been hustled into the basement of the West Wing to receive a dire briefing. A territorial conflict has turned hot, and the enemy is mulling a nuclear first strike against the United States. The atmosphere in the Situation Room is charged. The hawks advise immediate preparations for a retaliatory strike, but the Cabinet soon learns of a disturbing wrinkle. The enemy has developed a new cyberweapon, and fresh intelligence suggests that it can penetrate the communication system that connects the president to his nuclear forces. Any launch commands that he sends may not reach the officers responsible for carrying them out.

There are no good options in this scenario. Some players delegate launch authority to officers at missile sites, who must make their own judgments about whether a nuclear counterstrike is warranted—a scary proposition. But Schneider told me she was most unsettled by a different strategy, pursued with surprising regularity. In many games, she said, players who feared a total breakdown of command and control wanted to automate their nuclear launch capability completely. They advocated the empowerment of algorithms to determine when a nuclear counterstrike was appropriate. AI alone would decide whether to enter into a nuclear exchange.

Schneider’s game is, by design, short and stressful. Players’ automation directives were not typically spelled out with an engineer’s precision—how exactly would this be done? Could any automated system even be put in place before the culmination of the crisis?—but the impulse is telling nonetheless. “There is a wishful thinking about this technology,” Schneider said, “and my concern is that there will be this desire to use AI to decrease uncertainty by [leaders] who don’t understand the uncertainty of the algorithms themselves.”

AI offers an illusion of cool exactitude, especially in comparison to error-prone, potentially unstable humans. But today’s most advanced AIs are black boxes; we don’t entirely understand how they work. In complex, high-stakes adversarial situations, AI’s notions about what constitutes winning may be impenetrable, if not altogether alien. At the deepest, most important level, an AI may not understand what Ronald Reagan and Mikhail Gorbachev meant when they said, “A nuclear war cannot be won.”

There is precedent, of course, for the automation of Armageddon. After the United States and the Soviet Union emerged as victors of the Second World War, they looked set to take up arms in a third, a fate they avoided only by building an infrastructure of mutual assured destruction. This system rests on an elegant and terrifying symmetry, but it goes wobbly each time either side makes a new technological advance. In the latter decades of the Cold War, Soviet leaders worried that their ability to counter an American nuclear strike on Moscow could be compromised, so they developed a “dead hand” program.

It was so simple, it barely qualified as algorithmic: Once activated during a nuclear crisis, if a command-and-control center outside Moscow stopped receiving communications from the Kremlin, a special machine would inquire into the atmospheric conditions above the capital. If it detected telltale blinding flashes and surges in radioactivity, all the remaining Soviet missiles would be launched at the United States. Russia is cagey about this system, but in 2011, the commander of the country’s Strategic Missile Forces said it still exists and is on “combat duty.” In 2018, a former leader of the missile forces said it has “even been improved.”
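The decision rule described here is simple enough to capture in a few lines of code. The sketch below is purely illustrative, assuming invented sensor names and thresholds; nothing in it reflects the actual Soviet system’s parameters or implementation.

```python
from dataclasses import dataclass

@dataclass
class SensorReadings:
    kremlin_link_alive: bool      # is the command post still hearing from the Kremlin?
    flash_intensity: float        # normalized optical reading above the capital (0..1)
    radiation_sv_per_hr: float    # radiation level above the capital, sieverts/hour

# Illustrative thresholds, invented for this sketch; not real system parameters.
FLASH_THRESHOLD = 0.9
RADIATION_THRESHOLD = 10.0

def dead_hand_should_launch(activated: bool, readings: SensorReadings) -> bool:
    """Sketch of the decision rule described above: a launch is triggered only if
    the system was activated during a crisis, communications from the Kremlin
    have stopped, and sensors register both blinding flashes and a surge in
    radioactivity above the capital."""
    if not activated:
        return False
    if readings.kremlin_link_alive:
        return False
    return (readings.flash_intensity >= FLASH_THRESHOLD
            and readings.radiation_sv_per_hr >= RADIATION_THRESHOLD)

# Quiet skies and a live link: no launch.
print(dead_hand_should_launch(True, SensorReadings(True, 0.1, 0.2)))     # False
# Kremlin silent, flashes and radiation detected: launch condition met.
print(dead_hand_should_launch(True, SensorReadings(False, 0.95, 40.0)))  # True
```

Even in this toy form, the brittleness is visible: everything hinges on a handful of thresholds and on sensors never misreading the sky.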

In 2019, Curtis McGiffin, an associate dean at the Air Force Institute of Technology, and Adam Lowther, then the director of research and education at the Louisiana Tech Research Institute, published an article arguing that America should develop its own nuclear dead hand. New technologies have shrunk the period of time between the moment an incoming attack is detected and the last moment that a president can order a retaliatory salvo. If this decision window shrinks any further, America’s counterstrike ability could be compromised. Their solution: Backstop America’s nuclear deterrent with an AI that can make launch decisions at the speed of computation.

McGiffin and Lowther are right about the decision window. During the early Cold War, bomber planes like the one used over Hiroshima were the preferred mode of first strike. These planes took a long time to fly between the Soviet Union and the United States, and because they were piloted by human beings, they could be recalled. Americans built an arc of radar stations across the Canadian High Arctic, Greenland, and Iceland so that the president would have an hour or more of warning before the first mushroom cloud bloomed over an American city. That’s enough time to communicate with the Kremlin, enough time to try to shoot the bombers down, and, failing that, enough time to order a full-scale response.

The intercontinental ballistic missile (ICBM), first deployed by the Soviet Union in 1958, shortened that window, and within a decade, hundreds of them were slotted into the bedrock of North America and Eurasia. Any one of them can fly across the Northern Hemisphere in less than 30 minutes. To preserve as many of those minutes as possible, both superpowers sent up fleets of satellites that could spot the unique infrared signature of a missile launch in order to grok its precise parabolic path and target.

After nuclear-armed submarines were refined in the ’70s, hundreds more missiles topped with warheads began to roam the world’s oceans, nearer to their targets, cutting the decision window in half, to 15 minutes or perhaps fewer. (Imagine one bobbing up along the Delaware coast, just 180 miles from the White House.) Even if the major nuclear powers never successfully develop new nuclear-missile technology, 15 minutes or fewer is frighteningly little time for a considered human response. But they are working to develop new missile technology, including hypersonic missiles, which Russia is already using in Ukraine to strike quickly and evade missile defenses. Both Russia and China want hypersonic missiles to eventually carry nuclear warheads. These technologies could potentially cut the window in half again.

These few remaining minutes would go quickly, especially if the Pentagon couldn’t immediately conclude that a missile was headed for the White House. The president may need to be roused from sleep; launch codes could be fumbled. A decapitation strike could be completed with no retaliatory salvo yet ordered. Somewhere outside D.C., command and control would scramble to find the next civilian leader down the chain, as a more comprehensive volley of missiles rained down upon America’s missile silos, its military bases, and its major nodes of infrastructure.

A first strike of this sort would still be mad to attempt, because some American nuclear forces would most likely survive the first wave, especially submarines. But as we have learned again in recent years, reckless people sometimes lead nuclear powers. Even if the narrowing of the decision window makes decapitation attacks only marginally more tempting, countries may wish to backstop their deterrent with a dead hand.

The United States is not yet one of those countries. After McGiffin and Lowther’s article was published, Lieutenant General John Shanahan, the director of the Pentagon’s Joint Artificial Intelligence Center, was asked about automation and nuclear weapons. Shanahan said that although he could think of no stronger proponent for AI in the military than himself, nuclear command and control “is the one area I pause.”

The Pentagon has otherwise been working fast to automate America’s war machine. As of 2021, according to a report that year, it had at least 685 ongoing AI projects, and since then it has continually sought increased AI funding. Not all of the projects are known, but a partial vision of America’s automated forces is coming into view. The tanks that lead U.S. ground forces in the future will scan for threats on their own so that operators can simply touch highlighted spots on a screen to wipe out potential attackers. In the F-16s that streak overhead, pilots will be joined in the cockpit by algorithms that handle complex dogfighting maneuvers. Pilots will be free to focus on firing weapons and coordinating with swarms of autonomous drones.

In January, the Pentagon updated its previously murky policy to clarify that it will allow the development of AI weapons that can make kill shots on their own. This capability alone raises significant moral questions, but even these AIs will be operating, essentially, as troops. The role of AI in battlefield command and the strategic functioning of the U.S. military is largely limited to intelligence algorithms, which simultaneously distill data streams gathered from hundreds of sensors—underwater microphones, ground radar stations, spy satellites. AI won’t be asked to control troop movements or launch coordinated attacks in the very near future. The pace and complexity of warfare may increase, however, in part because of AI weapons. If America’s generals find themselves overmatched by Chinese AIs that can comprehend dynamic, million-variable strategic situations for weeks on end, without so much as a nap—or if the Pentagon fears that could happen—AIs might be placed in higher decision-making roles.

The precise makeup of America’s nuclear command and control is classified, but AI’s awesome processing powers are already being put to good use in the country’s early-alert systems. Even here, automation presents serious risks. In 1983, a Soviet early-alert system mistook glittering clouds above the Midwest for launched missiles. Catastrophe was averted only because Lieutenant Colonel Stanislav Petrov—a man for whom statues should be raised—felt in his gut that it was a false alarm. Today’s computer-vision algorithms are more sophisticated, but their workings are often mysterious. In 2018, AI researchers demonstrated that tiny perturbations in images of animals could fool neural networks into misclassifying a panda as a gibbon. If AIs encounter novel atmospheric phenomena that weren’t included in their training data, they may hallucinate incoming attacks.

But put hallucinations aside for a moment. As large language models continue to improve, they may eventually be asked to generate lucid text narratives of fast-unfolding crises in real time, up to and including nuclear crises. Once these narratives move beyond simple statements about the number and location of approaching missiles, they will become more like the statements of advisers, engaged in interpretation and persuasion. AIs may prove excellent advisers—dispassionate, hyperinformed, always reliable. We should hope so, because even if they are never asked to recommend responses, their stylistic shadings would undoubtedly influence a president.

Given wide enough leeway over conventional warfare, an AI with no nuclear-weapons authority could nonetheless pursue a gambit that inadvertently escalates a conflict so far and so fast that a panicked nuclear launch follows. Or it could purposely engineer battlefield situations that lead to a launch, if it thinks the use of nuclear weapons would accomplish its assigned goals. An AI commander will be creative and unpredictable: A simple one designed by OpenAI beat human players at a modified version of Dota 2, a battle simulation game, with strategies that they’d never considered. (Notably, it proved willing to sacrifice its own fighters.)

These more far-flung scenarios are not imminent. AI is viewed with suspicion today, and if its expanding use leads to a stock-market crash or some other crisis, these possibilities will recede, at least for a time. But suppose that, after some early hiccups, AI instead performs well for a decade or several decades. With that track record, it could perhaps be allowed to operate nuclear command and control in a moment of crisis, as envisioned by Schneider’s war-game participants. At some point, a president might preload command-and-control algorithms on his first day in office, perhaps even giving an AI license to improvise, based on its own impressions of an unfolding attack.

Much would depend on how an AI understands its goals in the context of a nuclear standoff. Researchers who have trained AI to play various games have repeatedly encountered a version of this problem: An AI’s sense of what constitutes victory can be elusive. In some games, AIs have performed in a predictable manner until some small change in their environment caused them to suddenly shift their strategy. For instance, an AI was taught to play a game where players look for keys to unlock treasure chests and secure a reward. It did just that until the engineers tweaked the game environment, so that there were more keys than chests, after which it started hoarding all the keys, even though many were useless, and only sometimes trying to unlock the chests. Any innovations in nuclear weapons—or defenses—could lead an AI to a similarly dramatic pivot.

Any country that inserts AI into its command and control will motivate others to follow suit, if only to maintain a credible deterrent. Michael Klare, a peace-and-world-security-studies professor at Hampshire College, has warned that if multiple countries automate launch decisions, there could be a “flash war” analogous to a Wall Street “flash crash.” Imagine that an American AI misinterprets acoustic surveillance of submarines in the South China Sea as movements presaging a nuclear attack. Its counterstrike preparations would be noticed by China’s own AI, which would actually begin to ready its launch platforms, setting off a series of escalations that would culminate in a major nuclear exchange.

In the early ’90s, during a moment of relative peace, George H. W. Bush and Mikhail Gorbachev realized that competitive weapons development would lead to endlessly proliferating nuclear warheads. To their great credit, they refused to submit to this arms-race dynamic. They instead signed the Strategic Arms Reduction Treaty, the first in an extraordinary sequence of agreements that shrank the two countries’ arsenals to less than a quarter of their previous size.

History has since resumed. Some of those treaties expired. Others were diluted as relations between the U.S. and Russia cooled. The two countries are now closer to outright war than they have been in generations. On February 21 of this year, less than 24 hours after President Joe Biden strolled the streets of Kyiv, Russian President Vladimir Putin said that his country would suspend its participation in New START, the last arsenal-limiting treaty that remains in effect. Meanwhile, China now likely has enough missiles to destroy every major American city, and its generals have reportedly grown fonder of their arsenal as they have seen the leverage that nuclear weapons have afforded Russia during the Ukraine war. Mutual assured destruction is now a three-body problem, and every party to it is pursuing technologies that could destabilize its logic.

The next moment of relative peace could be a long way away, but if it comes again, we should draw inspiration from Bush and Gorbachev. Their disarmament treaties were ingenious because they represented a recovery of human agency, as would a global agreement to forever keep AI out of nuclear command and control. Some of the scenarios set forth here may sound quite distant, but that’s more reason to think about how we can avoid them, before AI reels off an impressive run of battlefield successes and its use becomes too tempting.

A treaty can always be broken, and compliance with this one would be particularly difficult to verify, because AI development doesn’t require conspicuous missile silos or uranium-enrichment facilities. But a treaty can help establish a strong taboo, and in this realm a strongly held taboo may be the best we can hope for. We cannot encrust the Earth’s surface with automated nuclear arsenals that put us one glitch away from apocalypse. If errors are to deliver us into nuclear war, let them be our errors. To cede the gravest of all decisions to the dynamics of technology would be the ultimate abdication of human choice.



READ MORE
  


A Brutal Sex Trade Built for American Soldiers
Cho Soon-ok, a former comfort woman for the G.I.s, inside the former detention center in Dongducheon. (photo: Jean Chung/NYT)

A Brutal Sex Trade Built for American Soldiers
Choe Sang-Hun, The New York Times
Sang-Hun writes: "It’s a long-buried part of South Korean history: women compelled by force, trickery or desperation into prostitution, with the complicity of their own leaders."   

It’s a long-buried part of South Korean history: women compelled by force, trickery or desperation into prostitution, with the complicity of their own leaders.


When Cho Soon-ok was 17 in 1977, three men kidnapped and sold her to a pimp in Dongducheon, a town north of Seoul.

She was about to begin high school, but instead of pursuing her dream of becoming a ballerina, she was forced to spend the next five years under the constant watch of her pimp, going to a nearby club for sex work. Her customers: American soldiers.

The euphemism “comfort women” typically describes Korean and other Asian women forced into sexual slavery by the Japanese during World War II. But the sexual exploitation of another group of women continued in South Korea long after Japan’s colonial rule ended in 1945 — and it was facilitated by their own government.



READ MORE
 


Biden to Send 1,500 Active-Duty Troops to the Southern Border
The Biden administration fears that the lifting of a public health rule that allowed officials to quickly expel migrants could lead to an increase in border crossings. (photo: Go Nakamura/NYT)

Biden to Send 1,500 Active-Duty Troops to the Southern Border
Helene Cooper and Zolan Kanno-Youngs, The New York Times
Excerpt: "President Biden is sending 1,500 active-duty troops to the southern U.S. border with Mexico, officials said on Tuesday, as the administration braces for a possible influx of migrants seeking to take advantage of the lifting of Covid-era restrictions on asylum." 


The easing of Covid-era restrictions could lead to an increase in migrants, officials say.


President Biden is sending 1,500 active-duty troops to the southern U.S. border with Mexico, officials said on Tuesday, as the administration braces for a possible influx of migrants seeking to take advantage of the lifting of Covid-era restrictions on asylum.

Brig. Gen. Patrick S. Ryder, a Pentagon spokesman, told reporters that the troops would fill gaps in transportation, warehouse support, narcotics detection, data entry and other areas.

The Pentagon said the additional troops would be armed for self-defense but would not have a law enforcement role.



READ MORE
  


‘They Can Survive Just Fine’: Bernie Sanders Says Income Over $1 Billion Should Be Taxed at 100%
Bernie Sanders decried the wealth of Walmart's Walton family – $225 billion – while the retail giant’s employees depend on government assistance to survive. (photo: Evelyn Hockstein/Reuters)

‘They Can Survive Just Fine’: Bernie Sanders Says Income Over $1 Billion Should Be Taxed at 100%
Ramon Antonio Vargas, Guardian UK
Vargas writes: "The independent Vermont senator argued against unfettered capitalism in an interview with journalist Chris Wallace." 



The independent Vermont senator argued against unfettered capitalism in an interview with journalist Chris Wallace


The US government should confiscate 100% of any money that Americans make above $999m, the leftwing independent senator Bernie Sanders said late last week.

Sanders expressed that belief in an exchange on Friday evening with the host of Who’s Talking to Chris Wallace? on HBO Max.

Wallace had asked Sanders about the general assertion in his book It’s OK to Be Angry About Capitalism that billionaires should not exist.

“Are you basically saying that once you get to $999m that the government should confiscate all the rest?” Wallace asked the US senator from Vermont, who is an independent but caucuses with Democrats and has helped them attain their current slim majority in the upper congressional chamber.

“Yeah,” Sanders replied. “You may disagree with me but, fine, I think people can make it on $999m. I think that they can survive just fine.”

Wallace had earlier mentioned how the late Sam Walton built the giant retail chain Walmart into the largest single private employer in the US, and how his family has amassed a net worth of about $225bn. Sanders countered that Walmart in many cases pays starvation wages to its 1.2 million employees despite how rich the Waltons are.

“Many of their workers are on Medicaid or food stamps,” Sanders said, referring to forms of government assistance for which low-income Americans can qualify. “In other words, taxpayers are subsidizing the wealthiest family in the country. Do I think that’s right? No, I don’t.”

Nonetheless, Sanders said his comments on the matter weren’t a personal attack against the Waltons or other billionaires.

“It is an attack upon a system,” Sanders said. “You can have a vibrant economy without [a few] people owning more wealth than the bottom half of American society” combined.

He added that if he were in charge: “If you make a whole lot of money, you’re going to pay a whole lot of money.”

Sanders’s remarks are unlikely to ingratiate him with proponents of the US political right who already dismiss him as a communist. But they have never been a part of his base of supporters.

The 81-year-old has held one of Vermont’s US Senate seats since 2007. He had spent the previous 16 years representing the state in the US House of Representatives, helping him become the longest-serving independent in American congressional history.

Sanders, who has previously run unsuccessfully to become a Democratic presidential candidate, published It’s OK to Be Angry About Capitalism in February. In it, he notes that one-tenth of 1% of the US population owns 90% of the nation’s wealth, among other things.

He also argues that “unfettered capitalism … destroys anything that gets in its way in the pursuit of profits”, including the environment, democracy and human rights.

On Friday, Sanders told Wallace that he believes “people who work hard and create businesses should be rich”, but the concept of some being billionaires offended him deeply when a half-million Americans are homeless and 85 million of them cannot afford to buy health insurance.



READ MORE
   


Nearly a Third of Nurses Nationwide Say They Are Likely to Leave the Profession
Miriala Gonzalez, a registered nurse in Miami, carries a monkeypox vaccine. A new survey highlights major concerns from nurses nationwide regarding future staffing levels in hospitals. (photo: Joe Raedle/NPR)

Nearly a Third of Nurses Nationwide Say They Are Likely to Leave the Profession
Jaclyn Diaz, NPR
Diaz writes: "Close to a third of nurses nationwide say they are likely to leave the profession for another career due to the COVID-19 pandemic, a new survey from AMN Healthcare shows." 

Close to a third of nurses nationwide say they are likely to leave the profession for another career due to the COVID-19 pandemic, a new survey from AMN Healthcare shows.

This level is up at least seven points since 2021. And the survey found that the ongoing shortage of nurses is likely to continue for years to come.

About 94% of nurses who responded to the AMN Healthcare survey said that there was a severe or moderate shortage of nurses in their area, with half saying the shortage was severe. And around 89% of registered nurses (RNs) said the nursing shortage is worse than five years ago.

Nurses aren't optimistic about the future, either. At least 80% of those surveyed expect the shortage to get much worse in another five years, the report shows.

Unions representing nurses have long warned about the problem facing the profession, said National Nurses United President Deborah Burger and SEIU Healthcare 1199NW President Jane Hopkins. Both women are also RNs.

"It's a critical moment in our time for nurses. The country needs nurses. We are very short and we are feeling very worried about the future of their work," Hopkins said.

The COVID-19 pandemic certainly exacerbated problems, but short staffing was an issue even before then, Burger and Hopkins said.

"The staffing crisis didn't just happen. It's been around for years. Unions have been sounding the alarm that organizations were putting profits before patients," Hopkins said. Employers "had cut staffing so bad, that there was no room for flexibility."

She said she hears from members that they rarely have time to eat lunch or use the bathroom during their shifts.

Low staffing has a dangerous trickle-down effect, Burger said. It leads to a heavier workload, more stress and burnout for the remaining staff, as well as a negative impact on patient care.

The AMN Healthcare survey findings indicated younger generations of nurses were also less satisfied with their jobs compared to their older counterparts.

But even before the pandemic, the younger generation had signaled they were done with nursing, Hopkins said. "First and second year nurses were leaving the profession at a higher rate because it's not what they expected. This escalated during the pandemic," she said.

Across generations, a higher percentage of nurses also reported dealing with a great deal of stress at their jobs than in previous years, the survey said. Four in five nurses experience high levels of stress at work — an increase of 16 points from 2021.

Similarly, a higher share of nurses reported feeling emotionally drained than in the 2021 survey — up 15 percentage points in two years (62% to 77%).

One source of that stress? Nurses are also experiencing an increasing level of workplace violence in hospitals, Burger said.

"Nurses don't feel safe in many of the hospitals around the country. And we've heard horrendous stories. That also gets tied back into short staffing," she said.

Nurses have been fighting for better working conditions

This discontent among staff has deeper implications for hospitals and other organizations across the country.

In January, around 7,000 nurses in New York went on strike over a contract dispute with hospitals in the city. The nurses were looking for higher wages and better working conditions. This strike forced several hospitals to divert patients elsewhere.

Vox reported in January that nurses and other healthcare workers have frequently gone on strike in recent years. In 2022, eight of the 25 work stoppages involving 1,000 or more workers in the U.S. were carried out by nurses.

National Nurses United has issued a number of its own reports and surveys about the current state of the profession, which have come to similar conclusions to the AMN survey. The union has lobbied Congress hard to pass legislation that addresses staffing ratios and improves workplace safety provisions.

The AMN Healthcare survey similarly recommended that health care providers create safer working environments and that broader regulatory changes be made to produce meaningful differences.

Burger was more direct.

"Stop studying it and start actually legislating. Congress knows that they need to do something," Burger said.

"It's concerning that there's a lot of hand wringing," she said, but nothing is being done.



READ MORE
  


Uganda Parliament Passes Harsh Anti-LGBTQ Bill
Bill includes death penalty, long jail terms. (photo: AFP)

Uganda Parliament Passes Harsh Anti-LGBTQ Bill
Reuters
Excerpt: "Uganda's parliament on Tuesday passed one of the world's strictest anti-LGBTQ bills mostly unchanged, including provision for long jail terms and the death penalty, after the president requested some parts of the original legislation be toned down." 


Uganda's parliament on Tuesday passed one of the world's strictest anti-LGBTQ bills mostly unchanged, including provision for long jail terms and the death penalty, after the president requested some parts of the original legislation be toned down.

The new bill retains most of the harshest measures of the legislation adopted in March, which drew condemnation from the United States, European Union, United Nations and major corporations.

The provisions retained in the new bill allow for the death penalty in cases of so-called "aggravated homosexuality", a term the government uses to describe actions including having gay sex when HIV-positive.

It allows a 20-year sentence for promoting homosexuality, which activists say could criminalise any advocacy for the rights of lesbian, gay, bisexual, transgender and queer citizens.

The legislation now heads back to President Yoweri Museveni, who can sign it, veto it or return it again to parliament.

Museveni, a vocal opponent of LGBTQ rights, has signalled he intends to sign the legislation once certain changes are made, including the addition of measures to "rehabilitate" gay people.

It was not immediately clear if the new bill satisfied his requests, and his office was not available for comment.

The legislation was amended to stipulate that merely identifying as LGBTQ is not a crime. It also revised a measure that obliged people to report homosexual activity to only require reporting when a child is involved.

'USELESS' AMENDMENT

Human rights activist Adrian Jjuuko dismissed the first amendment regarding LGBTQ identification as "useless".

"In practice, the police doesn't care about whether you've committed the act or not. They will arrest you for acting like gay, walking like gay," he said.

Same-sex relations are already illegal in Uganda under a British colonial-era law. LGBTQ individuals routinely face arrest and harassment by law enforcement, and passage of the bill in March unleashed a wave of arrests, evictions and mob attacks, members of the community say.

Proponents of the bill say broad legislation is needed to counter what they allege, without evidence, are efforts by LGBTQ Ugandans to recruit children into homosexuality.

After a voice vote on Tuesday that followed less than a half-hour of debate, parliament speaker Anita Among urged lawmakers to remain defiant in the face of international criticism.

"Let's protect Ugandans, let's protect our values, our virtues," Among said. "The Western world will not come and rule Uganda."

Western governments suspended aid, imposed visa restrictions and curtailed security cooperation in response to another anti-LGBTQ law Museveni signed in 2014. That law was nullified within months by a domestic court on procedural grounds.

The U.S. government said last week that it was assessing the implications of the looming law for activities in Uganda under its flagship HIV/AIDS programme.


READ MORE
 


Groups to Sue Federal Officials Over Manatee Protection
Manatees float in a canal at Port Everglades, Jan. 18, 2023, in Fort Lauderdale, Fla. Several conservation groups announced Tuesday, May 2, 2023, that they're planning to sue federal wildlife officials, citing a failure to protect the West Indian manatee following record death rates in recent years. (photo: Lynne Sladky/AP)

Groups to Sue Federal Officials Over Manatee Protection
Associated Press
Excerpt: "Several conservation groups announced Tuesday that they’re planning to sue federal wildlife officials, citing a failure to protect the West Indian manatee following record death rates in recent years." 


Several conservation groups announced Tuesday that they’re planning to sue federal wildlife officials, citing a failure to protect the West Indian manatee following record death rates in recent years.

The Center for Biological Diversity, Harvard Animal Law & Policy Clinic, Miami Waterkeeper and Frank S. González García filed a notice of their intention to sue the U.S. Fish and Wildlife Service. The notice is required by law before suing a federal agency.

The legal notice follows a November petition urging FWS to reclassify the species from threatened to endangered under the Endangered Species Act. The conservation groups said FWS was required by law to determine within 90 days whether the petition presents substantial information indicating the reclassification may be warranted. No findings have been issued yet, even though more than 150 days have gone by, the groups said.

“I’m appalled that the Fish and Wildlife Service hasn’t responded to our urgent request for increased protections for these desperately imperiled animals,” Center for Biological Diversity attorney Ragan Whitlock said in a statement. “It’s painfully clear that manatees need full protection under the Endangered Species Act, and they need it now.”

Manatees are gentle, round-tailed giants sometimes known as sea cows that weigh as much as 1,200 pounds (550 kilograms) and can live as long as 65 years. Manatees are Florida’s official state marine mammal but are listed as a threatened species, with many dying from starvation due to pollution-fueled algae blooms killing the seagrass they rely on for food. They also face peril from boat strikes and toxic red tide algae outbreaks along the state’s Gulf coast. Their closest living relative is the elephant.

Since having their legal protections reduced in 2017 when the species was reclassified as threatened instead of endangered, manatee numbers have declined dramatically, conservationists said.

Fueled by wastewater treatment discharges, leaking septic systems and fertilizer runoff, algae blooms can become so thick that seagrass can’t get the sunlight it needs to survive, jeopardizing the manatees’ main food supply. That’s contributed to the deaths of nearly 2,000 manatees in the past two years, representing more than 20% of all manatees in Florida, the groups said.

An FWS spokesperson said the agency doesn’t comment on proposed or pending litigation.


READ MORE

 
