RISKS-LIST: Risks-Forum Digest Wednesday 11 October 2023 Volume 33 : Issue 89
ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
   Peter G. Neumann, founder and still moderator
***** See last item for further information, disclaimers, caveats, etc. *****
This issue is archived at <http://www.risks.org> as
  <http://catless.ncl.ac.uk/Risks/33.89>
The current issue can also be found at
  <http://www.csl.sri.com/users/risko/risks.txt>
Contents:
Autonomous Vehicles Are Driving Blind (NYTimes)
How a Series of Air Traffic Control Lapses Nearly Killed 131 People
(NYTimes)
A private jet took evasive action to avoid a fighter plane in Austin
(WashPost)
How Israel's Feared Security Services Failed to Stop Hamas's Attack
(NYTimes)
What was 60 Minutes thinking, in that interview with Geoff Hinton?
(Substack)
Your Medical Devices Are Getting Smarter. Can the FDA Keep Them Safe? (WSJ)
Fake at scale: Generative AI looms over global elections cycle
(Politico Europe)
Amazon's Alexa has been claiming the 2020 election was stolen (WashPost)
Verified accounts spread fake news release about a Biden $8-billion
aid package to Israel (NBC News)
Airworthiness Directive Mandates Garmin Autopilot Software Fix (AVweb)
Inside the final seconds of a deadly Tesla Autopilot crash (WashPost)
Why a search engine that scans your face is dangerous (NPR)
How Amazon's Ring camera network alters L.A. neighborhoods (LA Times)
Connected cars' dirty little secret: They're the trailing edge of 5G
adoption (Light Reading)
Vermont Utility Plans to End Outages by Giving Customers Batteries (NYTimes)
Google is making their weak and flawed passkey system the default login
method -- I urge you NOT to use them! (Lauren Weinstein)
Vietnam tried to hack U.S. officials, CNN with posts on X, probe finds
(WashPost)
California's 'right to repair' bill is now California's 'right to repair'
law (Engadget)
Airbnb guest in luxury rental has refused to leave or pay (L.A. Times)
WhatsApp says warnings of a cyberattack targeting Jewish people are baseless
(NBC News)
Inside FTX's All-Night Race to Stop a Billion Crypto Heist (WiReD)
Re: False news spreads faster than the truth (Martin Ward)
Re: Rooftop Solar ongoing maintenance issues (David E. Ross)
Re: Google accused of directing motorist to drive off collapsed bridge
(Jim Geissman)
Abridged info on RISKS (comp.risks)
----------------------------------------------------------------------
Date: Wed, 11 Oct 2023 19:20:53 -0600
From: Matthew Kruk <mkrukg@gmail.com>
Subject: Autonomous Vehicles Are Driving Blind (NYTimes)
https://www.nytimes.com/2023/10/11/opinion/driverless-cars-san-francisco.html
In San Francisco this month, a woman suffered traumatic injuries from being struck by a driver and thrown into the path of one of hundreds of
self-driving cars roaming the city's streets. San Francisco's fire chief, Jeanine Nicholson, recently testified that as of August, autonomous vehicles interfered with firefighting duties 55 times this year. Tesla's autopilot software, a driver-assistance system, has been involved in 736 crashes and
17 fatalities nationwide since 2019.
For all the ballyhoo over the possibility of artificial intelligence threatening humanity someday, there's remarkably little discussion of the
ways it is threatening humanity right now. When it comes to self-driving
cars, we are driving blind.
------------------------------
Date: Wed, 11 Oct 2023 09:01:01 -0400
From: Monty Solomon <monty@roscom.com>
Subject: How a Series of Air Traffic Control Lapses Nearly Killed
131 People (NYTimes)
Two planes were moments from colliding in Texas, a harrowing example of the country's fraying air-safety system, a *New York Times* investigation found.
https://www.nytimes.com/2023/10/11/business/air-traffic-control-austin-airport-fedex-southwest.html
------------------------------
Date: Tue, 10 Oct 2023 23:26:57 -0400
From: Monty Solomon <monty@roscom.com>
Subject: A private jet took evasive action to avoid a fighter plane in
Austin (WashPost)
The aircraft came within 200 feet of one another, according to a preliminary FAA account, in an incident that also involved a third aircraft.
https://www.washingtonpost.com/transportation/2023/10/10/austin-near-miss-military-private-jet/
------------------------------
From: Monty Solomon <monty@roscom.com>
Date: Wed, 11 Oct 2023 09:01:01 -0400
Subject: How Israel's Feared Security Services Failed to Stop Hamas's Attack
(NYTimes)
Israel’s military and espionage services are considered among the world's best, but on Saturday, operational and intelligence failures led to the
worst breach of Israeli defenses in half a century.
https://www.nytimes.com/2023/10/10/world/middleeast/israel-gaza-security-failure.html
[This is way beyond the ability of RISKS to encompass. See
* Thomas Friedman, This Hamas-Israeli Fight Will Send Shock Waves
Far Away, NYTimes opinion, 9 Oct 2023
[Almost Everything is Interrelated. PGN]
* Bret Stephens, The Yom Kippur War Led to Peace. This One Can, Too.
NYTimes opinion, 9 Oct 2023
* The Editorial Board, The Attack on Israel Demands Unity and Resolve,
10 Oct 2023
* Thomas Friedman, Israel Has Never Needed to be Smarter Than Now,
NYTimes opinion, 11 Oct 2023
* The Anti-Israel Left Needs to Take a Hard Look at Itself
NYTimes opinion, 11 Oct 2023
PGN]
------------------------------
Date: Tue, 10 Oct 2023 23:59:30 -0400
From: Gabe Goldberg <gabe@gabegold.com>
Subject: What was 60 Minutes thinking, in that interview with Geoff
Hinton? (Substack)
Scott Pelley didn’t exactly do his homework
Scott Pelley: Does humanity know what it's doing?
Geoffrey Hinton: No.
Gary Marcus: I tend to agree. When it comes to AI in particular, we are
getting way ahead of our skis, rushing forward a technology we don’t fully understand. For all the differences we have had over the years, I salute you for speaking out.
Geoffrey Hinton: I think we're moving into a period when for the first time ever we may have things more intelligent than us.
Scott Pelley: You believe they can understand?
Geoffrey Hinton: Yes.
Scott Pelley: You believe they are intelligent?
Geoffrey Hinton: Yes.
Gary Marcus: As it happens, I sharply disagree with all three of the points
Geoff just made. To be sure, it's all partly definitional. But I don't think
we are all that close to machines that are more intelligent than us, I don't
think they really understand the things that they say, and I don't think
they are intelligent in the sense of being able to adaptively and flexibly
reason about things they haven't encountered before, in a reliable way. What
Geoff has left out is any reference to all of the colossally stupid and
ungrounded things generative AI systems do routinely, like fabricating the
other night that Liz Cheney had replaced Kevin McCarthy as Speaker, by a
220-215 vote that never happened, or learning that Tom Cruise is the son of
Mary Pfeiffer and yet not being able to infer that Mary Pfeiffer is Tom
Cruise's mother, or claiming that two pounds of feathers weigh less than one
pound of bricks. Geoff himself wrote a classic paper about trying to get
neural networks to infer family relationships, almost forty years ago; it's
embarrassing to see these systems still struggle on such basic problems.
Since they can't reliably solve them, I don't think we should attribute
"understanding" to them, at least not in any remotely deep sense of the word
understanding. Emily Bender and Timnit Gebru have called these systems
"stochastic parrots", which in my view is a little unkind to parrots -- but
also vividly captures something real: a lot of what we are seeing now is a
kind of unreliable mimicry. I really wish you could have addressed both the
question of mimicry and of reliability. (Maybe next time?) I don't see how
you can call an agent with such a loose grip on reality all that
intelligent, nor how you can simply ignore the role of mimicry in all this.
https://open.substack.com/pub/garymarcus/p/what-was-60-minutes-thinking-in-that
------------------------------
Date: Wed, 11 Oct 2023 07:47:48 +0000
From: Richard Marlon Stein <rmstein@protonmail.com>
Subject: Your Medical Devices Are Getting Smarter. Can the FDA Keep Them
Safe? (WSJ)
https://www.wsj.com/tech/ai/your-medical-devices-are-getting-smarter-can-the-fda-keep-up-acc182e8?mod=tech_lead_story (use
https://history-computer.com/how-to-read-articles-behind-a-paywall/ to bypass paywall).
The WSJ's headline is oxymoronic.
The FDA is attempting to adapt medical device regulations to accommodate
AI's ability to learn and, thereby, improve patient outcomes by evolving
device capabilities without re-qualification processes as traditionally
practiced. The medical industrial complex's adoption of AI promotes
extractive profit while compromising patient outcomes, a recipe for
accelerating consumer brand outrage and trust erosion.
Medical device safety is an important FDA mission objective, but annual
medical device reports (MDRs) for popular implanted devices are disturbing
for at least two reasons: (1) The product code report densities, which aggregate MDRs for similar devices among manufacturers, tend to grow each
year. These increments usually indicate greater deployment; and, (2)
aggregate device implantation numbers are NOT published annually, but MDRs
are required. Informed consumer device comparisons are impossible. We know
the equivalent of "product defect escapes," but not the total number of deployed devices.
Read too many inappropriate shock, accelerated battery depletion, and defibrillator over-sensing MDRs and a suspicion arises: black-box AI will
NOT favorably impact patient outcome expectations. False negative/positive event density and under-performing device area under curve (AUC) values will harm patient quality of life. AI can't detect if a defibrillator electrode cauterized after implantation. Electrode fracture? Could it learn enough to automatically (and safely) adjust amplifier gain without human inspection of the ECG waveform? Can AI tell if a defibrillator electrode dislodged?
Patient syncope? Pericarditis?
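To make the AUC reference concrete, here is a tiny Python sketch using
made-up detector scores (not data from any real device): AUC is the
probability that a true event is ranked above a non-event, so a detector
whose AUC sits well below 1.0 necessarily trades missed events against
inappropriate shocks.

    # Sketch with hypothetical scores: AUC = probability a true event
    # outranks a non-event (0.5 = chance, 1.0 = perfect discrimination).
    def auc(event_scores, non_event_scores):
        pairs = len(event_scores) * len(non_event_scores)
        wins = sum(1.0 if e > n else 0.5 if e == n else 0.0
                   for e in event_scores for n in non_event_scores)
        return wins / pairs

    # A hypothetical arrhythmia detector that mis-ranks some episodes:
    print(auc([0.9, 0.6, 0.55, 0.4], [0.65, 0.5, 0.45, 0.3]))  # ~0.69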
Sanitize AI training dataset bias, strengthen corporate governance
accountability by limiting indemnification privileges for medical device
CxOs and boards, and apply and rigorously enforce NIST SP 800-53 control
families to manufacturers' SDLCs to yield greater patient benefit and build
brand trust. Suppress defect escape. Spare consumers from the hackneyed
"AI-enhanced, smart defibrillator" TV advertisements. Software toxic waste
is neither smart nor enhanced.
What follows are CSV records extracted from the FDA's TPLC (Total Product
Life Cycle) platform from 01JAN2020 to 30SEP2023 for "top-10" device and
patient problem MDRs on product codes LYJ, LWS, and DXY. See
https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfTPLC/tplc.cfm and set
the Year to 2020 and populate the Product Code to retrieve the records
below.
Device: Stimulator, autonomic nerve, implanted for epilepsy
Product Code: LYJ
MDR Year,MDR Reports,MDR Events
2020,1688,1688
2021,1764,1764
2022,1584,1584
2023,1327,1327
Device Problems,MDRs with this Device Problem,Events in those MDRs
Adverse Event Without Identified Device or Use Problem,3447,3447
Fracture,1396,1396
High impedance,691,691
Low impedance,177,177
Premature Discharge of Battery,159,159
Naturally Worn,140,140
Device Contamination with Body Fluid,138,138
Corroded,99,99
False Alarm,82,82
Premature End-of-Life Indicator,79,79
Patient Problems,MDRs with this Patient Problem,Events in those MDRs
No Clinical Signs, Symptoms or Conditions,1891,1891
Convulsion, Clonic,1172,1172
No Known Impact Or Consequence To Patient,706,706
Seizures,496,496
Post Operative Wound Infection,312,312
Appropriate Clinical Signs, Symptoms, Conditions Term / Code Not Available,257,257
Unspecified Infection,172,172
Implant Pain,169,169
Neck Pain,161,161
Paralysis,144,144
Device: Implantable pacemaker pulse-generator
Product Code: DXY
MDR Year,MDR Reports,MDR Events
2020,546,546
2021,560,560
2022,784,784
2023,688,688
Device Problems,MDRs with this Device Problem,Events in those MDRs
Adverse Event Without Identified Device or Use Problem,511,511
Premature Discharge of Battery,249,249
Over-Sensing,248,248
Failure to Interrogate,138,138
Pacing Problem,134,134
Pacemaker Found in Back-Up Mode,131,131
Failure to Capture,123,123
Signal Artifact/Noise,111,111
High Capture Threshold,111,111
Under-Sensing,100,100
Patient Problems,MDRs with this Patient Problem,Events in those MDRs
No Clinical Signs Symptoms or Conditions,1244,1244
Unspecified Infection,320,320
No Known Impact Or Consequence To Patient,199,199
Insufficient Information,181,181
No Consequences Or Impact To Patient,123,123
Shock from Patient Lead(s),77,77
Arrhythmia,63,63
Syncope/Fainting,59,59
Discomfort,33,33
Device: Implantable cardioverter defibrillator (non-crt)
Product Code: LWS
Definition: These devices treat tachycardia (fast heartbeats) with RV defibrillation
therapy as necessary.
MDR Year,MDR Reports,MDR Events
2020,16910,16910
2021,19659,19659
2022,23463,23463
2023,22052,22052
Device Problems,MDRs with this Device Problem,Events in those MDRs
Over-Sensing,18722,18722
Premature Discharge of Battery,13154,13154
High impedance,12928,12928
Adverse Event Without Identified Device or Use Problem,12535,12535
Inappropriate/Inadequate Shock/Stimulation,10956,10956
Signal Artifact/Noise,10003,10003
Fracture,5760,5760
Impedance Problem,4502,4502
Battery Problem,4397,4397
High Capture Threshold,4255,4255
Patient Problems,MDRs with this Patient Problem,Events in those MDRs
No Clinical Signs, Symptoms or Conditions,42509,42509
Unspecified Infection,10309,10309
Electric Shock,6623,6623
No Known Impact Or Consequence To Patient,5189,5189
No Consequences Or Impact To Patient,4556,4556
Shock from Patient Lead(s),3538,3538
Insufficient Information,3370,3370
No Code Available,3335,3335
Sepsis,1619,1619
Pocket Erosion,885,885
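For readers who want to redo the comparison, here is a minimal Python sketch
(illustrative only, not an FDA tool; the device labels and yearly counts are
copied from the extracts above). It tabulates the yearly MDR report counts
and the year-over-year change; as argued earlier, no failure *rate* can be
computed because implantation denominators are not published, and the 2023
figures cover only January through September.

    import csv
    from io import StringIO

    # Yearly "MDR Reports" counts copied from the TPLC extracts above.
    MDR_YEARLY = {
        "LYJ: Stimulator, autonomic nerve, implanted for epilepsy":
            "MDR Year,MDR Reports\n2020,1688\n2021,1764\n2022,1584\n2023,1327\n",
        "DXY: Implantable pacemaker pulse-generator":
            "MDR Year,MDR Reports\n2020,546\n2021,560\n2022,784\n2023,688\n",
        "LWS: Implantable cardioverter defibrillator (non-CRT)":
            "MDR Year,MDR Reports\n2020,16910\n2021,19659\n2022,23463\n2023,22052\n",
    }

    for device, blob in MDR_YEARLY.items():
        print(device)
        prev = None
        for row in csv.DictReader(StringIO(blob)):
            count = int(row["MDR Reports"])
            change = "" if prev is None else f" ({(count - prev) / prev:+.1%} vs. prior year)"
            print(f"  {row['MDR Year']}: {count:>6}{change}")
            prev = count
        # A true failure *rate* would need annual implantation counts as the
        # denominator -- numbers that are not published.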
------------------------------
Date: Tue, 10 Oct 2023 11:37:39 PDT
From: Peter G Neumann
Subject: Fake at scale: Generative AI looms over global elections cycle
(Politico Europe)
Gian Volpicelli (with Mark Scott contributing), POLITICO Europe,
9 Oct 2023
For fans of democracy, the rise of super-charged generative artificial intelligence couldn't have come at a worse time.
The United States, European Union parliament, United Kingdom, Poland, the Netherlands and potentially Ukraine will all hold elections in the next 16 months. Those working to keep election integrity intact are warily eyeing
the advent of generative AI as a way to stoke up already heavily polarized political debates, threats of foreign influence and fake news.
For years, AI tools capable of forging convincing images, audio and videos
of existing individuals -- so-called deepfakes -- have drawn warnings of
what havoc such an arsenal could wreak in the hands of disinformation
peddlers. But it wasn't until advanced AI models -- like text-creating
ChatGPT and image-conjuring DALL-E 2 -- became widely available in late
2022 that the danger became palpable.
"The combination of AI and disinformation is the nightmare," the European Commission's digital honcho V=ECra Jourov=E1 said at the end of September
when discussing the EU's code of practice on online disinformation.
Henry Adjer, a visiting researcher at the University of Cambridge
specializing in deepfakes, said: "These [AI] applications were previously
prohibitively expensive or difficult to access for an everyday person. Now
they're in consumer-facing apps, on websites, often free or very cheap."
With generative AI, falsehoods can be churned out quickly, convincingly and
at scale. A new generation of AI-powered disinformation is expected to
worsen existing societal divisions that have made many voters more polarized than ever before.
Last month's Slovak election, which handed a victory to populist Robert
Fico, gave an early taste of the confusion AI-generated disinformation could sow. Two eleventh-hour audio clips circulating online purportedly revealed Liberal politician Michal Šimečka discussing how he planned to rig the election and hatching plans to -- God forbid -- increase the price of beer. Slovak fact-checkers attempted to verify the clips' authenticity and
eventually concluded they were likely created via AI. By the time they'd reached that conclusion, the clips had already been shared thousands of
times.
In Poland, which goes to the polls on October 15, centrist opposition party Civic Platform has been criticized for an attack ad on X mixing real footage
of right-wing Prime Minister Mateusz Morawiecki with AI-generated clips of
his voice. (Civic Platform flagged that the ad contained AI content in a separate post.)
Similarly, U.S. Republican presidential hopeful, Florida Governor Ron
DeSantis, ran an ad where real pictures of his rival Donald Trump and his pandemic-era health care adviser Anthony Fauci appeared side by side with AI-generated photos of Trump and Fauci hugging and kissing.
It's happening outside of Europe and the U.S. too. In Sudan, AI-made audio clips went viral on TikTok. In Venezuela, state media outlets have used software from U.K.-based firm Synthesia to create clips of nonexistent
Western journalists praising the country's economic performance.
"There are 60 of our 70 countries in which we found an example of the use of generative AI to manipulate political social information," said Allie Funk, a research director at nonprofit Freedom House, which this month published a report<
https://
freedomhouse.org/report/freedom-net/2023/repressive-power-artificial-intelligence> on AI's nefarious effects on democracy.
Platforms such as TikTok and Google have recently instituted policies to restrict or stave off AI-generated content, with Google requiring the disclosure of AI use in political ads.
Funk, however, was careful not to call generative AI "a game changer." While deepfake-detection tools are imperfect and online platforms are struggling
to quickly root out AI-generated falsehoods, none of the cases witnessed so
far have had a significant electoral influence.
But others warned the speed, ease and wide availability of the booming generative AI models are moving the needle for election integrity. Especially when it comes to conversational bots able to create high-quality text.
"These technologies will allow you to scale up 'friendships' in a new way," said Carl Miller, a researcher at the Demos think tank. "Imagine you could build thousands of parallel, meaningful conversations with a target
audience, where you don't just spam disinformation, but very gently
introduce false ideas."
------------------------------
Date: Sat, 7 Oct 2023 17:12:09 -0400
From: Jan Wolitzky <jan.wolitzky@gmail.com>
Subject: Amazon's Alexa has been claiming the 2020 election was stolen
(WashPost)
Amid concerns the rise of artificial intelligence will supercharge the
spread of misinformation comes a wild fabrication from a more prosaic
source: Amazon's Alexa, which declared that the 2020 presidential election
was stolen.
Asked about fraud in the race -- in which President Biden defeated former president Donald Trump with 306 electoral college votes -- the popular voice assistant said it was stolen by a massive amount of election fraud, citing Rumble, a video-streaming service favored by conservatives.
The 2020 races were ``notorious for many incidents of irregularities and indications pointing to electoral fraud taking place in major metro
centers,'' according to Alexa, referencing Substack, a subscription
newsletter service. Alexa contended that Trump won Pennsylvania, citing an
Alexa Answers contributor.
https://www.washingtonpost.com/technology/2023/10/07/amazon-alexa-news-2020-election-misinformation/
[Risks? As I noted recently, we have completely lost the sense of ground
truth, and there seems to be no path back to sanity. Once again, the
truthiness has been exposed: No Virginia, There is No Sanity Clause.
PGN]
------------------------------
Date: Mon, 9 Oct 2023 18:21:40 -0400
From: Monty Solomon <monty@roscom.com>
Subject: Verified accounts spread fake news release about a Biden
$8-billion aid package to Israel (NBC News)
The edited White House news release has sparked false headlines that rose to the top of Google search results.
https://www.nbcnews.com/tech/internet/verified-accounts-spread-fake-news-release-biden-8-billion-aid-package-rcna119372
------------------------------
Date: Tue, 10 Oct 2023 00:30:10 -0400
From: Gabe Goldberg <gabe@gabegold.com>
Subject: Airworthiness Directive Mandates Garmin Autopilot Software Fix
(AVweb)
On 6 Oct 2023, the FAA proposed a new airworthiness directive requiring operators of thousands of aircraft to update Garmin autopilot software to address a flaw causing the autopilot to make unintended flight-control inputs.
According to the agency, the AD was issued in response to an incident
involving an F33A Bonanza experiencing “an un-commanded automatic pitch trim runaway when the autopilot was first engaged.”
The proposed rule states: “The affected autopilot system software does not properly handle certain hardware failures of the pitch trim servo. This
could result in an automatic uncommanded pitch trim runaway and loss of
control of the airplane.”
https://www.avweb.com/aviation-news/ad-mandates-garmin-autopilot-software-fix/
------------------------------
Date: Sat, 7 Oct 2023 16:51:22 -0400
From: Gabe Goldberg <gabe@gabegold.com>
Subject: Inside the final seconds of a deadly Tesla Autopilot crash
(The Washington Post)
https://www.washingtonpost.com/technology/interactive/2023/tesla-autopilot-crash-analysis/?utm_source=alert&utm_medium=email&utm_campaign=wp_news_alert_revere&location=alert
Risks? People spewing blame in all directions, calling the analysis a hit
job by *The Post*, slamming them for rehashing old news, etc. Plus blaming
the truck driver. And defending Tesla, saying that the driver shouldn't have
engaged full self-driving. Well, yeah -- but the car shouldn't have allowed
it on a road it wasn't meant for. And saying that it's OK for a couple of
people to be killed using it as long as, overall, it's alleged to be safer
than human driving.

One risk *not* addressed in the article is that the sides of such trailers
lack protection against cars running underneath, as the Tesla did. And at
70 mph, I'm not sure the protection that's sometimes added would have let
the driver survive.
------------------------------
Date: Wed, 11 Oct 2023 07:04:35 -0700
From: Steve Bacher <sebmb1@verizon.net>
Subject: Why a search engine that scans your face is dangerous (NPR)
https://www.npr.org/2023/10/11/1204822946/facial-recognition-search-engine-ai-pim-eyes-google
Imagine strolling down a busy city street and snapping a photo of a stranger then uploading it into a search engine that almost instantaneously helps you identify the person.
This isn't a hypothetical. It's possible now, thanks to a website called PimEyes, considered one of the most powerful publicly available facial recognition tools online.
------------------------------
Date: Wed, 11 Oct 2023 20:20:55 -0400
From: Monty Solomon <monty@roscom.com>
Subject: How Amazon's Ring camera network alters L.A. neighborhoods
(LA Times)
Cameras, cops and paranoia: How Amazon’s surveillance network alters
L.A. neighborhoods
https://www.latimes.com/business/technology/story/2023-10-11/cameras-cops-and-paranoia-how-amazons-surveillance-network-alters-l-a-neighborhoods
------------------------------
Date: Mon, 9 Oct 2023 03:00:00 -0400
From: Gabe Goldberg <gabe@gabegold.com>
Subject: Connected cars' dirty little secret: They're the trailing
edge of 5G adoption (Light Reading)
At MWC Las Vegas, telecom industry execs suggested ways to pull out of a
tech deployment parking lot.
Connected cars as a trailing indicator
The program opened with TechInsights analyst Roger Lanctot outlining the box that automakers have put themselves in by sticking with LTE -- by that research firm's estimates, 5G won't show up on most new light-duty vehicles produced until 2027. And that looked optimistic compared to Qualcomm's
estimate of 2028 for 5G to cross 50%, as shared by product-management VP
Jeff Arnold in a later talk Thursday.
"If an automaker, I can do most of the applications we're talking about with LTE," Lanctot said. But while that's been cheaper in the short run, over the long term it will yield vehicles left offline, a risk carmakers should know from the forced retirement of GM's first-generation, AMPS-only OnStar
system: "LTE ain't gonna be around for 15-20 years."
https://www.lightreading.com/5g/connected-cars-dirty-little-secret-they-re-the-trailing-edge-of-5g-adoption
------------------------------
Date: Mon, 09 Oct 2023 19:27:31 +0000
From: Henry Baker <hbaker1@pipeline.com>
Subject: Vermont Utility Plans to End Outages by Giving Customers
Batteries (NYTimes)
Terrific idea! How come it's taken this long for a utility to utilize the
advantages of a *distributed* power system to reduce the need for
long-distance power transmission?
I'm still waiting for one of the cellphone companies to start paying
homeowners to put nano cellsites on their roofs in order to avoid having
to build stand-alone cellsites/towers.
Vermont Utility Plans to End Outages by Giving Customers Batteries
Ivan Penn, 9 Oct 2023
https://www.nytimes.com/2023/10/09/business/energy-environment/green-mountain-home-batteries.html
Many electric utilities are putting up lots of new power lines as they
rely more on renewable energy and try to make grids more resilient in
bad weather. But a Vermont utility is proposing a very different
approach: It wants to install batteries at most homes to make sure its customers never go without electricity.
The company, Green Mountain Power, proposed buying batteries, burying
power lines and strengthening overhead cables in a filing with state
regulators on Monday. It said its plan would be cheaper than building
a lot of new lines and power plants.
The plan is a big departure from how U.S. utilities normally do
business. Most of them make money by building and operating power
lines that deliver electricity from natural gas power plants or wind
and solar farms to homes and businesses. Green Mountain--a relatively
small utility serving 270,000 homes and businesses--would still use
that infrastructure but build less of it by investing in
television-size batteries that homeowners usually buy on their own.
"Call us the un-utility," Mari McClure, Green Mountain's chief
executive, said in an interview before the company's filing. "We're
completely flipping the model, decentralizing it."
Like many places, Vermont has been hit hard this year by extreme
weather linked to climate change. Half a dozen severe storms,
including major floods in July, have caused power outages and damaged
homes and other buildings.
Those calamities and concerns about the rising cost of electricity
helped shape Green Mountain's proposal, Ms. McClure said. As the
company ran the numbers, it realized that paying recovery costs and
building more power lines to improve its system would cost a lot more
and take a lot longer than equipping homes with batteries.
Green Mountain's plan builds on a program it has run since 2015 to
lease Tesla home batteries to customers. Its filing asks the Vermont
Public Utility Commission to authorize it to initially spend $280
million to strengthen its grid and buy batteries, which will come from
various manufacturers.
The company expects to invest an estimated $1.5 billion over the next
seven years--money that it would recoup through electricity rates. The
utility said the investment was justified by the growing sum it had to
spend on storm recovery and to trim and remove trees around its power
lines.
The utility said it would continue offering battery leases to
customers who want them sooner. It will take until 2030 for the
company to install batteries at most homes under its new plan if
regulators approve it. Green Mountain says its goal to do away with
power outages will be realized by that year, meaning customers would
always have enough electricity to use lights, refrigerators and other essentials.
"We don't want the power to be off for our customers ever,"
Ms. McClure said. "People's lives are on the line. That is ultimately
at the heart of why we're doing what we're trying to do."
------------------------------
Date: Tue, 10 Oct 2023 07:56:01 -0700
From: Lauren Weinstein <lauren@vortex.com>
Subject: Google is making their weak and flawed passkey system the default
login method -- I urge you NOT to use them!
Google continues to push ahead with its ill-advised scheme to force
passkeys on users who do not understand their risks, and will try to push
all users into this flawed system starting imminently.
In my discussions with Google on this matter (I have chatted multiple
times with the Googler in charge of this), they have admitted that
their implementation, by depending completely on device authentication
security which for many users is extremely weak, will put many users
at risk of their Google accounts being compromised. However, they feel
that overall this will be an improvement for users who have strong authentication on their devices.
And as for ordinary people who already are left behind by Google when
something goes wrong? They'll get the shaft again. Google has ALWAYS
operated on this basis -- if you don't fit into their majority silos,
they just don't care. Another way for Google users to get locked out
of their accounts and lose all their data, with no useful help from
Google.
With Google's deficient passkey system implementation -- they refuse to consider an additional authentication layer for protection -- anyone who has authenticated access to your device (that includes the creep that watched
you access your phone in that bar before he stole it) will have full and unrestricted access to your Google passkeys and accounts on the same
basis. And when you're locked out, don't complain to Google, because they'll just say that you're not the user they're interested in.
"Thank you for choosing Google."
[and then the next day:
More on Google passkeys
To be clear, there's nothing inherently wrong with the concept of passkeys
-- IF implemented properly. The problem is that Google's specific implementation sucks so badly and puts so many users at risk, and that
combined with their horrific account recovery procedures that lock so many
innocent users away from their data permanently, is a recipe for many
already disadvantaged non-techie users to be even further shafted. -L]
------------------------------
Date: Mon, 9 Oct 2023 11:00:41 -0400
From: Monty Solomon <monty@roscom.com>
Subject: Vietnam tried to hack U.S. officials, CNN with posts on
X, probe finds (WashPost)
The targeting came as Vietnamese and American diplomats were negotiating a major cooperation agreement intended to counter growing Chinese influence in the region.
https://www.washingtonpost.com/technology/2023/10/09/vietnam-predator-hack-investigation/
------------------------------
Date: Wed, 11 Oct 2023 22:34:16 -0400
From: Monty Solomon <monty@roscom.com>
Subject: California's 'right to repair' bill is now California's 'right
to repair' law (Engadget)
https://www.engadget.com/californias-right-to-repair-bill-is-now-californias-right-to-repair-law-232526782.html
[Monty also noted: California's newest law will make it easier to
delete personal online data
https://www.theverge.com/2023/10/11/23912548/california-delete-act-personal-data-single-request-online-data-brokers
PGN]
------------------------------
Date: Mon, 9 Oct 2023 22:20:40 -0400
From: Monty Solomon <monty@roscom.com>
[continued in next message]