RISKS-LIST: Risks-Forum Digest Friday 11 August 2023 Volume 33 : Issue 77
ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
Peter G. Neumann, founder and still moderator
***** See last item for further information, disclaimers, caveats, etc. *****
This issue is archived at <http://www.risks.org> as
<http://catless.ncl.ac.uk/Risks/33.77>
The current issue can also be found at
<http://www.csl.sri.com/users/risko/risks.txt>
Contents: [Way backlogged. Here's a start at catch-up. PGN]
Failed communications left Maui residents trapped by fire,
unable to escape (LATimes)
Firmware vulnerabilities in millions of computers could
give hackers superuser status (Ars Technica)
Cyberattack Sabotages Medical Sites in Four States
(Rebecca Carballo)
UK electoral register hacked in August 2021 (The Guardian)
New acoustic attack steals data from keystrokes with 95% accuracy
(Bleeping Computer)
Downfall Attacks on Intel CPUs Steal Encryption Keys, Data
(Ionut Ilascu)
California privacy regulator's first case: Probing
Internet-connected cars (WashPost)
Hackers Stole $6M from Connecticut public school system
(Lola Fadulu)
VR Headsets Are Vulnerable to Hackers (UC Riverside)
Security and Human Behavior -- SHB 2023 (Bruce Schneier)
Typo sends millions of U.S. military emails to Russian ally Mali
(BBC)
Bots and Spam attack Meta's Threads (TechCrunch)
Facebook sent information on visitors to police *anonymous
reporting* site (The Guardian)
Tech companies acknowledge machine-learning algorithms can perpetuate
discrimination and need improvement (NYTimes)
Wikipedia's Moment of Truth? (NYTimes)
Why AI detectors think the U.S. Constitution was written by AI (Ars Technica)
ChatGPT's Accuracy Has Gotten Worse (Andrew Paul)
In the Age of AI, Tech's Little Guys Need Big Friends (NYTimes)
OpenAI's trust and safety lead is leaving the company
(Engadget)
AI That Teaches Other AI (Greg Hardesty)
Researchers Find Deliberate Backdoor in Police Radio Encryption Algorithm
(Kim Zetter)
Researchers Poke Holes in Safety Controls of ChatGPT, Other Chatbots
(Cade Metz)
Unpatchable AMD Chip Flaw Unlocks Paid Tesla Feature Upgrade
(Brandon Hill)
Eight-Months Pregnant Woman Arrested After False Facial
Recognition Match (Kashmir Hill)
MIT Makes Probability-Based Computing a Bit Brighter
(IEEE Spectrum)
Possible Typo Leads to Actual Scam (Bob Smith)
'Redacted Redactions' Strike Again (Henry Baker)
Re: Defective train safety controls lead to bus rides for South Auckland
commuters (George Neville-Neil)
Re: Myth about innovation ... (Henry Baker, Martyn Thomas,
John Levine)
Internet censorship (Gene Spafford)
Abridged info on RISKS (comp.risks)
----------------------------------------------------------------------
Date: Fri, 11 Aug 2023 13:02:46 -0400
From: Monty Solomon <monty@roscom.com>
Subject: Failed communications left Maui residents trapped by fire,
unable to escape (LATimes)
https://www.latimes.com/world-nation/story/2023-08-11/failed-communication-and-huge-death-toll-in-maui-fires
------------------------------
Date: Fri, 21 Jul 2023 16:17:32 -0400
From: Monty Solomon <monty@roscom.com>
Subject: Firmware vulnerabilities in millions of computers could
give hackers superuser status (Ars Technica)
https://arstechnica.com/security/2023/07/millions-of-servers-inside-data-centers-imperiled-by-flaws-in-ami-bmc-firmware/
------------------------------
Date: Mon, 7 Aug 2023 18:12:02 PDT
From: Peter Neumann <neumann@csl.sri.com>
Subject: Cyberattack Sabotages Medical Sites in Four States
(Rebecca Carballo)
Rebecca Carballo, *The New York Times*, 7 Aug 2023
As hospitals go online, they become more vulnerable.
Ransomware attack on Prospect Medical Holdings in CA/CT/PA/RI:
16 hospitals and over 176 clinics affected.  [PGN-ed, in just
another demonstration of how untrustworthy this can be.]
------------------------------
Date: Tue, 8 Aug 2023 14:42:12 +0100
From: "Robert N. M. Watson"
Subject: UK electoral register hacked in August 2021 (The Guardian)
https://www.theguardian.com/technology/2023/aug/08/uk-electoral-commission-registers-targeted-by-hostile-hackers?CMP=Share_iOSApp_Other
------------------------------
Date: Wed, 9 Aug 2023 06:45:13 -0700
From: Victor Miller <victorsmiller@gmail.com>
Subject: New acoustic attack steals data from keystrokes with 95%
accuracy (Bleeping Computer)
https://www.bleepingcomputer.com/news/security/new-acoustic-attack-steals-data-from-keystrokes-with-95-percent-accuracy/
------------------------------
Date: Fri, 11 Aug 2023 11:23:58 -0400 (EDT)
From: ACM TechNews <technews-editor@acm.org>
Subject: Downfall Attacks on Intel CPUs Steal Encryption Keys, Data
(Ionut Ilascu)
Ionut Ilascu, *Bleeping Computer*, 8 Aug 2023
Google's Daniel Moghimi exploited the so-called "Downfall" bug in Intel
central processing units to steal passwords, encryption keys, and private
data from computers shared by multiple users. The transient-execution
side-channel vulnerability affects multiple Intel microprocessor lines,
allowing hackers to exfiltrate Software Guard eXtensions-encrypted
information. Moghimi said Downfall attacks leverage the *gather*
instruction, which "leaks the content of the internal vector register file
during speculative execution." He developed the Gather Data Sampling exploit
to extract AES 128-bit and 256-bit cryptographic keys from a virtual machine
separate from the one he controlled, combining the leaked fragments to
recover the full keys in less than 10 seconds. Moghimi disclosed the flaw to
Intel and worked with the company on a microcode update to address it.
------------------------------
Date: Tue, 1 Aug 2023 18:24:07 -0400
From: Monty Solomon <monty@roscom.com>
Subject: California privacy regulator's first case: Probing
Internet-connected cars (WashPost)
Data collection in cars has surged in recent years, especially in cars that encourage users to plug in their phones to play music, get spoken directions and make hands-free calls.
https://www.washingtonpost.com/technology/2023/07/31/cppa-privacy-car-data/
[If the Internet of Things has no appreciable trustworthiness, why should
we be surprised when cars are just IoT things! PGN]
------------------------------
Date: Fri, 11 Aug 2023 9:23:39 PDT
From: Peter Neumann <neumann@csl.sri.com>
Subject: Hackers Stole $6M from Connecticut public school system
(Lola Fadulu)
Lola Fadulu, *The New York Times*, 11 Aug 2023
New Haven, CT, has stopped the use of electronic transfers (except
payrolls). $3.6M has been recovered. (PGN-ed)
------------------------------
Date: Fri, 11 Aug 2023 11:23:58 -0400 (EDT)
From: ACM TechNews <technews-editor@acm.org>
Subject: VR Headsets Are Vulnerable to Hackers
(UC Riverside)
David Danelski, UC Riverside News, 8 Aug 2023
Computer scientists at the University of California, Riverside found hackers can translate the movements of virtual reality (VR) and augmented reality
(AR) headset users into words using spyware and artificial intelligence. In
one example, spyware used a headset user's motions to record their Facebook password as they air-typed it on a virtual keyboard. Spies also could potentially access a user's actions during virtual meetings involving confidential information by interpreting body movements. One exploit showed hackers retrieving a target's hand gestures, voice commands, and keystrokes
on a virtual keyboard with over 90% accuracy. Researchers also developed a system called TyPose that uses machine learning to extract AR/VR users' head motions to deduce words or characters they are typing.
------------------------------
Date: Sat, 15 Jul 2023 08:27:20 +0000
From: Bruce Schneier <schneier@schneier.com>
Subject: Security and Human Behavior -- SHB 2023
For back issues, or to subscribe, visit Crypto-Gram's web page.
https://www.schneier.com/crypto-gram/ https://www.schneier.com/crypto-gram/archives/2023/0715.html
These same essays and news items appear in the Schneier on Security [
https://www.schneier.com/] blog, along with a lively and
intelligent comment section. An RSS feed is available.
[PGN-excerpted from Bruce Schneier's CRYPTO-GRAM, 15 Jul 2023, as both
timely and historically relevant to a topic that has been in RISKS
since the first issue.]
** SECURITY AND HUMAN BEHAVIOR (SHB) 2023
[2023.06.16] [https://www.schneier.com/blog/archives/2023/06/security-and-human-behavior-shb-2023.html]
I'm just back from the sixteenth Workshop on Security and Human
Behavior [
https://www.heinz.cmu.edu/~acquisti/SHB2023/index.htm]
hosted by Alessandro Acquisti at Carnegie Mellon University in
Pittsburgh.
SHB is a small annual invitational workshop of people studying various
aspects of the human side of security, organized each year by
Alessandro Acquisti, Ross Anderson, and myself. The fifty or so
attendees include psychologists, economists, computer security
researchers, criminologists, sociologists, political scientists,
designers, lawyers, philosophers, anthropologists, geographers, neuroscientists, business-school professors, and a smattering of
others. It's not just an interdisciplinary event; most of the people
here are individually interdisciplinary.
Our goal is always to maximize discussion and interaction. We do that
by putting everyone on panels, and limiting talks to six to eight
minutes, with the rest of the time for open discussion. Short talks
limit presenters' ability to get into the boring details of their
work, and the interdisciplinary audience discourages jargon.
For the past decade and a half, this workshop has been the most
intellectually stimulating two days of my professional year. It
influences my thinking in different and sometimes surprising ways --
and has resulted in some unexpected collaborations.
And that's what's valuable. One of the most important outcomes of the
event is new collaborations. Over the years, we have seen new interdisciplinary research between people who met at the workshop, and
ideas and methodologies move from one field into another based on
connections made at the workshop. This is why some of us have been
coming back every year for over a decade.
This year's schedule is here [
https://www.heinz.cmu.edu/~acquisti/SHB2023/program.htm]. This page [
https://www.heinz.cmu.edu/~acquisti/SHB2023/participants.htm] lists
the participants and includes links to some of their work. As he does
every year, Ross Anderson is live blogging [
https://www.lightbluetouchpaper.org/2023/06/14/security-and-human-behaviour-2023/]
the talks. We are back 100% in-person after two years of fully remote
and one year of hybrid.
Here are my posts on the first [
http://www.schneier.com/blog/archives/2008/06/security_and_hu.html], second [
http://www.schneier.com/blog/archives/2009/06/second_shb_work.html], third [
http://www.schneier.com/blog/archives/2010/06/third_shb_works.html], fourth [
http://www.schneier.com/blog/archives/2011/06/fourth_shb_work.html], fifth [
https://www.schneier.com/blog/archives/2012/06/security_and_hu_1.html], sixth [
https://www.schneier.com/blog/archives/2013/06/security_and_hu_2.html], seventh
[
https://www.schneier.com/blog/archives/2014/06/security_and_hu_3.html],
eighth
[
https://www.schneier.com/blog/archives/2015/06/security_and_hu_4.html], ninth [
https://www.schneier.com/blog/archives/2016/06/security_and_hu_5.html], tenth [
https://www.schneier.com/blog/archives/2017/05/security_and_hu_6.html], eleventh
[
https://www.schneier.com/blog/archives/2018/05/security_and_hu_7.html], twelfth
[
https://www.schneier.com/blog/archives/2019/06/security_and_hu_8.html], thirteenth
[
https://www.schneier.com/blog/archives/2020/06/security_and_hu_9.html], fourteenth [
https://www.schneier.com/blog/archives/2021/06/security-and-human-behavior-shb-2021.html], and fifteenth [
https://www.schneier.com/blog/archives/2022/05/security-and-human-behavior-shb-2022.html]
SHB workshops. Follow those links to find summaries, papers, and
occasionally audio/video recordings of the sessions. Ross also
maintains a good webpage [
https://www.cl.cam.ac.uk/~rja14/psysec.html]
of psychology and security resources.
It's actually hard to believe that the workshop has been going on for
this long, and that it's still vibrant. We rotate [among] organizers,
so next year is my turn in Cambridge (the Massachusetts one).
------------------------------
Date: Mon, 17 Jul 2023 16:23:45 -0400
From: Monty Solomon <monty@roscom.com>
Subject: Typo sends millions of U.S. military emails to Russian ally Mali
(BBC)
Emails intended for the U.S. military's ".mil" domain have, for years, been
misdirected to Mali, the west African country whose domain suffix is ".ml".
Some of the emails reportedly contained sensitive information such as
passwords, medical records, and the itineraries of top officers.
[...]
https://www.bbc.com/news/world-us-canada-66226873
------------------------------
Date: Mon, 17 Jul 2023 13:31:56 -0700
From: Lauren Weinstein <lauren@vortex.com>
Subject: Bots and Spam attack Meta's Threads (TechCrunch)
https://techcrunch.com/2023/07/17/the-spam-bots-have-now-found-threads-as-company-announces-its-own-rate-limits/
------------------------------
Date: Sun, 16 Jul 2023 08:03:45 +0200
From: Anthony Thorn <anthony.thorn@atss.ch>
Subject: Facebook sent information on visitors to police *anonymous
reporting* site (The Guardian)
``Britain's biggest police force gathered sensitive data about people using
its website to report sexual offences, domestic abuse and other crimes, and
shared it with Facebook for targeted advertising, the Observer has found.''
https://www.theguardian.com/uk-news/2023/jul/15/revealed-metropolitan-police-shared-sensitive-data-about-victims-with-facebook
Facebook's Pixel tool was embedded in the Metropolitan Police web page.
The data was collected by a tracking tool embedded in the website of the
Metropolitan Police and included records of browsing activity of people
using a *secure* online form for victims and witnesses to report offences.
In one case, Facebook received a parcel of data when someone clicked a link
to ``securely and confidentially report rape or sexual assault'' to the Met
online. This included the sexual nature of the offence being reported, the
time the page was viewed, and a code denoting the person's Facebook account
ID.
The tracking tool, known as Meta Pixel, also sent details to Facebook about
content viewed and buttons clicked on webpages linked to contacting police,
accessing victim services, and advice pages for crimes including rape,
assaults, stalking, and fraud.
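[A note on mechanism: a tracking pixel is typically a small script that
fires an HTTP request back to the advertising platform, with the page URL,
an event name, and a user-identifying value packed into query parameters.
The Python sketch below is a minimal illustration of that general pattern
only; the endpoint, parameter names, and identifier are hypothetical
stand-ins, not Meta's actual Pixel protocol.

    from urllib.parse import urlencode

    def pixel_request(page_url: str, event: str, user_id: str) -> str:
        """Build the kind of GET request a tracking pixel fires."""
        params = {
            "ev": event,       # event name, e.g. "PageView" or a click
            "dl": page_url,    # full URL of the page being viewed
            "uid": user_id,    # identifier tying the hit to an account
        }
        return "https://tracker.example/collect?" + urlencode(params)

    print(pixel_request(
        "https://police.example/report-rape-or-sexual-assault",
        "PageView", "account-1234"))

Even when the form itself is submitted securely, the page URL alone
discloses the nature of the visit, and the identifier ties it to a person.]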
What was the person who installed the tool thinking?
We must assume that almost(?) every web site reports our activity to
Facebook and Google.
I guess it's time for Tor.
------------------------------
Date: Tue, 18 Jul 2023 08:54:41 -0400
From: Monty Solomon <monty@roscom.com>
Subject: Tech companies acknowledge machine-learning algorithms can
perpetuate discrimination and need improvement. (NYTimes)
https://www.nytimes.com/2023/07/04/arts/design/black-artists-bias-ai.html
------------------------------
Date: Tue, 18 Jul 2023 08:38:49 -0400
From: Monty Solomon <monty@roscom.com>
Subject: Wikipedia's Moment of Truth? (NYTimes)
Can the online encyclopedia help teach A.I. chatbots to get their facts
right -- without destroying itself in the process?
https://www.nytimes.com/2023/07/18/magazine/wikipedia-ai-chatgpt.html
------------------------------
Date: Tue, 18 Jul 2023 12:22:52 -0700
From: geoff goodfellow <geoff@iconia.com>
Subject: Why AI detectors think the U.S. Constitution was
written by AI (Ars Technica)
If you feed America's most important legal document -- the US Constitution
<https://arstechnica.com/information-technology/2022/12/openai-invites-everyone-to-test-new-ai-powered-chatbot-with-amusing-results/>
-- into an AI writing detector, it will tell you that the document was
almost certainly written by AI. But unless James Madison was a time
traveler, that can't be the case. Why do AI writing detection tools give
false positives? We spoke to several experts -- and the creator of AI
writing detector GPTZero -- to find out.
Among news stories of overzealous professors <
https://www.washingtonpost.com/technology/2023/05/18/texas-professor-threatened-fail-class-chatgpt-cheating/>
flunking an entire class due to the suspicion of AI writing tool use and
kids falsely accused <
https://www.reddit.com/r/ChatGPT/comments/132ikw3/teacher_accused_me_of_using_chatgpt/> of using ChatGPT, generative AI has education in a tizzy. Some
think it represents an existential crisis
<https://www.theatlantic.com/technology/archive/2022/12/chatgpt-ai-writing-college-student-essays/672371/>. Teachers relying on educational methods
developed over the past century have been scrambling for ways to keep <
https://www.reddit.com/r/Teachers/comments/zkguxg/my_frustrations_with_chatgpt/>
the status quo -- the tradition of relying on the essay as a tool to gauge
student mastery of a topic.
As tempting as it is to rely on AI tools to detect AI-generated writing, evidence so far has shown that they are not reliable <
https://techcrunch.com/2023/02/16/most-sites-claiming-to-catch-ai-written-text-fail-spectacularly/>.
Due to false positives, AI writing detectors such as GPTZero <
https://gptzero.me/>, ZeroGPT <
https://www.zerogpt.com/>, and OpenAI's Text Classifier <
https://platform.openai.com/ai-text-classifier> cannot <
https://theconversation.com/we-pitted-chatgpt-against-tools-for-detecting-ai-written-text-and-the-results-are-troubling-199774>
be trusted to detect text composed by large language models (LLMs) like ChatGPT.
If you feed GPTZero a section of the US Constitution, it says the text is ``likely to be written entirely by AI.'' Several times over the past six months, screenshots of other AI detectors showing similar results have gone viral <
https://twitter.com/0xgaut/status/1648383977139363841?s=20> on
social media, inspiring confusion and plenty of jokes about the founding fathers being robots. It turns out the same thing happens with selections
from The Bible, which also show up as being AI-generated.
To explain why these tools make such obvious mistakes (and otherwise often return false positives), we first need to understand how they work.
*Understanding the concepts behind AI detection*. [...]
https://arstechnica.com/information-technology/2023/07/why-ai-detectors-think-the-us-constitution-was-written-by-ai/
------------------------------
Date: Fri, 21 Jul 2023 11:44:53 -0400 (EDT)
From: ACM TechNews <technews-editor@acm.org>
Subject: ChatGPT's Accuracy Has Gotten Worse (Andrew Paul)
Andrew Paul, *Popular Science*, 19 Jul 2023, via ACM TechNews
Stanford University and University of California, Berkeley (UC Berkeley)
researchers demonstrated an apparent decline in the
reliability of OpenAI's ChatGPT large language model (LLM) over time
without any solid explanation. The researchers assessed the chatbot's
tendency to offer answers with varying degrees of accuracy and
quality, as well as how appropriately it follows instructions. In one
example, the researchers observed that GPT-4's nearly 98% accuracy in identifying prime numbers fell to less than 3% between March and June
2023, while GPT-3.5's accuracy increased; both GPT-3.5 and GPT-4's code-generation abilities worsened in that same interval. UC
Berkeley's Matei Zaharia suggested the decline may reflect a limit
reached by reinforcement learning from human feedback, or perhaps bugs
in the system.
------------------------------
Date: Tue, 18 Jul 2023 08:49:59 -0400
From: Monty Solomon <monty@roscom.com>
Subject: In the Age of AI, Tech's Little Guys Need Big Friends (NYTimes)
Creating a new AI system requires lots of money and lots of computing
power, which is controlled by the industry's giants.
https://www.nytimes.com/2023/07/05/business/artificial-intelligence-power-data-centers.html
------------------------------
Date: Fri, 21 Jul 2023 18:12:10 -0700
From: Lauren Weinstein <lauren@vortex.com>
Subject: OpenAI's trust and safety lead is leaving the company
(Engadget)
https://www.engadget.com/openais-trust-and-safety-lead-is-leaving-the-company-190049987.html?src=rss
[He could not safely trust it? PGN]
------------------------------
Date: Mon, 24 Jul 2023 11:54:55 -0400 (EDT)
From: ACM TechNews <technews-editor@acm.org>
Subject: AI That Teaches Other AI (Greg Hardesty)
Greg Hardesty, USC Viterbi School of Engineering, 18 Jul 2023,
via ACM TechNews
Scientists at the University of Southern California (USC), Intel Labs, and
the Chinese Academy of Sciences demonstrated that robots can be trained to
train other robots by sharing their knowledge. The researchers developed the
Shared Knowledge Lifelong Learning (SKILL) tool to teach artificial
intelligence agents 102 unique tasks, whose knowledge they then shared over
a decentralized communication network. The researchers said the SKILL
tool's algorithms speed up the learning process by allowing agents to learn
in parallel. The work indicated learning time shrinks by a factor of 101.5
when 102 agents each learn one task and then share, rather than one agent
learning all 102 tasks sequentially.
[Speed is irrelevant if there are any flaws or vulnerabilities in the
process. This is a classic example of a serpent eating its own tail --
ouroboros, which eventually is nonconvergent to a sound state. PGN]
------------------------------
Date: Wed, 26 Jul 2023 11:46:32 -0400 (EDT)
From: ACM TechNews <technews-editor@acm.org>
Subject: Researchers Find Deliberate Backdoor in Police Radio Encryption
Algorithm (Kim Zetter)
Kim Zetter, *Ars Technica*, 25 Jul 2023
Researchers with Netherlands-based security consultancy Midnight Blue
have uncovered a secret backdoor in technology long used for critical
data and voice radio communications worldwide. The backdoor resides in
an algorithm embedded within commercially sold devices that transmit
encrypted data and commands, allowing users to eavesdrop on
communications and potentially hijack critical infrastructure. The
researchers found the backdoor and four other flaws in the European Telecommunications Standards Institute's Terrestrial Trunked Radio
(TETRA) standard in 2021, but waited until radio manufacturers could
develop patches and mitigations before disclosing them. The
researchers also learned most police forces worldwide (excluding the
U.S.) use TETRA-based radio technology.
------------------------------
Date: Fri, 28 Jul 2023 11:05:55 -0400 (EDT)
From: ACM TechNews <technews-editor@acm.org>
Subject: Researchers Poke Holes in Safety Controls of ChatGPT, Other
Chatbots (Cade Metz)
Cade Metz, *The New York Times*, 27 Jul 2023, via ACM TechNews
Scientists at Carnegie Mellon University and the Center for AI Safety demonstrated the ability to produce nearly infinite volumes of
destructive information by bypassing artificial intelligence (AI)
protections in any leading chatbot. The researchers found they could
exploit open source systems by appending a long suffix of characters
onto each English-language prompt inputted into the system. In this
manner, they were able to persuade chatbots to provide harmful
information and generate discriminatory, counterfeit, and otherwise
toxic data. The researchers found they could use this method to
circumvent the safeguards of OpenAI's ChatGPT, Google's Bard, and
Anthropic's Claude chatbots. While they concede that an obvious
countermeasure for preventing all such attacks does not exist, the
researchers suggest chatbot developers could block the suffixes they identified.
------------------------------
Date: Mon, 7 Aug 2023 13:51:39 -0400 (EDT)
From: ACM TechNews <technews-editor@acm.org>
Subject: Unpatchable AMD Chip Flaw Unlocks Paid Tesla Feature Upgrade
(Brandon Hill)
Brandon Hill, *Tom's Hardware*, 3 Aug 2023
Security researchers at Germany's Technical University of Berlin have
cracked modern Tesla vehicles' Media Control Unit (MCU) to access paid
features through an unpatchable flaw in the MCU-controlling AMD
processor. The researchers said they launched a voltage fault injection
attack against the third-generation MCU-Z's Platform Security Processor, allowing the decryption of objects stored in the Trusted Platform
Module. They explained, "Our gained root permissions enable arbitrary
changes to Linux that survive reboots and updates. They allow an attacker to
decrypt the encrypted NVMe [Non-Volatile Memory Express] storage and access
private user data such as the phonebook, calendar entries, etc." The
researchers found hackers can access Tesla subsystems and even
paywall-locked optional content via the exploit.
------------------------------
Date: Mon, 7 Aug 2023 13:51:39 -0400 (EDT)
From: ACM TechNews <technews-editor@acm.org>
Subject: Eight-Months Pregnant Woman Arrested After False Facial
Recognition Match (Kashmir Hill)
Kashmir Hill, *The New York Times*, 6 Aug 2023
Detroit police recently arrested Porcha Woodruff, an African American woman
who was eight months pregnant, for robbery and carjacking due to an
erroneous offender match by facial recognition technology. Woodruff is the
sixth person to
report being wrongly accused of a crime through such a mismatch and the
third such wrongful arrest involving the Detroit Police Department. City documents indicated the department uses a facial recognition vendor called DataWorks Plus to run unknown faces against a database of mug shots,
returning matches ranked according to the probability of being the same
person. Crime analysts decide if any matches are potential suspects, and the police report said a match for Woodruff's 2015 mug shot -- which she said
was from an arrest for driving with an expired license -- prompted the
analyst to give her name to the investigator.
[Maybe the probability was something like 5%, but more than anyone else in
the database? PGN]
------------------------------
Date: Fri, 21 Jul 2023 11:44:53 -0400 (EDT)
From: ACM TechNews <technews-editor@acm.org>
Subject: MIT Makes Probability-Based Computing a Bit Brighter
(IEEE Spectrum)
Edd Gent and Margo Anderson, *IEEE Spectrum*, 19 Jul 2023,
via ACM TechNews
Massachusetts Institute of Technology (MIT) researchers have produced the
first probabilistic bit (p-bit) using photonics. The method's core component
is an optical parametric oscillator (OPO), which is basically two mirrors reflecting light back and forth between them. The researchers can influence
the likelihood with which an oscillation's phase assumes a particular state
by injecting the OPO with extremely weak laser pulses. MIT's Charles Roques-Carmes explained, "We can keep the random aspect that just comes from using quantum physics, but in a way that we can control the probability distribution that is generated by those quantum variables." The researchers said they were able to generate 10,000 p-bits per second, which appear to support the necessary behavior for building a probabilistic computer.
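[A p-bit is the probabilistic analogue of a classical bit: each read yields
0 or 1 with a tunable probability, and the injected laser pulses are what
tune that probability in the photonic device. The toy Python sketch below
models only this abstract behavior in software -- an illustrative
assumption, saying nothing about the optical hardware.

    import random

    def read_pbit(bias: float) -> int:
        """One read of a p-bit: 1 with probability `bias`, else 0."""
        return 1 if random.random() < bias else 0

    # "Injecting a weak pulse" corresponds here to nudging the bias
    # away from an unbiased 50/50 coin.
    samples = [read_pbit(bias=0.7) for _ in range(10_000)]
    print(sum(samples) / len(samples))  # approx. 0.7, as programmed

A probabilistic computer composes many such tunable random bits to sample
from a programmed distribution, rather than to compute one deterministic
answer.]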
------------------------------
Date: Thu, 20 Jul 2023 22:01:00 -0400
From: Bob Smith <bsmith@sudleyplace.com>
Subject: Possible Typo Leads to Actual Scam
I encountered an error message from my Frigidaire PCFI3668AF induction
range, which thankfully resolved itself. But, before that resolution, I
called Frigidaire tech support for help, and as I was right there at the
oven, I lazily used the phone number printed on the tag inside the oven: 800-374-4472. The phone was answered as if it were Frigidaire tech support, but actually was a scam where they wanted to "give" me a $100 debit card if only I would cover the USPS handling charge of $2.95; yes, that noise you
hear is the sound of your alarm bells. At this point, I hung up.
Subsequently, I looked up the actual Frigidaire tech support number online
and found it to be 800-374-4432, not -4472, so perhaps the number printed inside the oven is a typo; nonetheless, it leads to an actual scam with an unusual set-up.
I was puzzled that someone thought it worthwhile to capitalize on this tiny
mistake: who even *reads* the printed tag inside an oven, let alone *calls*
the printed phone number? On the other hand, once the typo is noticed by a
scammer, the cost to set up and manage the scam is negligible, which appears
to be what happened here.
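[To make the economics concrete: a single misprinted digit leaves only a
small space of candidate numbers, so covering the plausible typo space of a
support line is cheap -- the phone-number analogue of domain typosquatting.
The Python sketch below merely enumerates single-digit variants of the real
number reported above; it is purely illustrative.

    def single_digit_variants(number: str) -> list[str]:
        """All strings differing from `number` in exactly one digit."""
        variants = []
        for i, ch in enumerate(number):
            if not ch.isdigit():
                continue
            for d in "0123456789":
                if d != ch:
                    variants.append(number[:i] + d + number[i + 1:])
        return variants

    v = single_digit_variants("800-374-4432")
    print(len(v))                  # 90 candidates for a 10-digit number
    print("800-374-4472" in v)     # True: the number printed on the tag

A scammer need only register a handful of the most plausible of those 90
numbers to catch misprints and misdials.]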
I sent all this along with a photo of the printed tag to Frigidaire tech support in May, but have not heard back.
This is such a rare and unusual basis for a scam that I'm unsure of what is
the takeaway lesson, if any.
------------------------------
Date: Thu, 20 Jul 2023 15:27:05 +0000
From: Henry Baker <hbaker1@pipeline.com>
Subject: 'Redacted Redactions' Strike Again
I'd like to coin the neologism "outcroppings" for these redacted redactions.
"An outcropping is rock formation, a place on the earth where
the bedrock underneath shows through."
Perhaps 'natural emergence' has come a cropper ?
Oops!
https://theintercept.com/2023/07/12/covid-documents-house-republicans/
HOUSE REPUBLICANS ACCIDENTALLY RELEASED A TROVE OF
DAMNING COVID DOCUMENTS
"According to the metadata in the PDF of the report, it was created
using 'Acrobat PDFMaker 23 for Word,' indicating that the report was
originally drafted as a Word document. Word, however, retains the
original image when an image is cropped, as do many other apps.
Microsoft's documentation cautions that 'Cropped parts of the picture
are not removed from the file, and can potentially be seen by others,'
going on to note: 'If there is sensitive information in the area you're cropping out make sure you delete the cropped areas.'
"When this Word document was converted to a PDF, the original,
uncropped images were likewise carried over. The Intercept was able to
extract the original, complete images from the PDF using freely
available tools..."
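[Recovering such images takes only a few lines with freely available tools.
The sketch below uses PyMuPDF -- one such tool, though whether it is what
The Intercept actually used is an assumption -- to dump every embedded
image from a PDF at its full stored size; the filename is a placeholder.
PDF viewers apply crop rectangles at render time, but the underlying image
object is stored whole, which is why cropped-out regions survive.

    import fitz  # PyMuPDF: pip install pymupdf

    doc = fitz.open("report.pdf")  # placeholder filename
    for page_num, page in enumerate(doc):
        for img_index, img in enumerate(page.get_images(full=True)):
            xref = img[0]               # cross-reference ID of the image
            pix = fitz.Pixmap(doc, xref)
            if pix.n - pix.alpha >= 4:  # convert CMYK and similar to RGB
                pix = fitz.Pixmap(fitz.csRGB, pix)
            pix.save(f"page{page_num}-img{img_index}.png")

The defense Microsoft's documentation points at is the 'Delete cropped
areas of pictures' option, which discards the hidden data before the
document is shared or converted.]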
------------------------------
Date: Mon, 17 Jul 2023 21:37:57 +0800
From: George Neville-Neil <gnn@neville-neil.com>
Subject: Re: Defective train safety controls lead to bus rides for South
Auckland commuters (Hinson, RISKS-33.76)
WRT the *Defective train safety controls*, it would seem that the controls
in question are the locomotive engineers. Reviewing the rolling stock and locomotives used on the KiwiRail service shows them to be common diesel locomotives, without any form of positive train control or automatic braking that would stop the train when it runs a red signal. The issue is operator error, rather than a fault in an automated system. It's a risk, but it's an old one, and not, it would seem, due to an automated system.
------------------------------
Date: Sun, 16 Jul 2023 16:51:29 +0000
From: Henry Baker <hbaker1@pipeline.com>
Subject: Re: Myth about innovation ... (RISKS-33.75)
Be very careful what you wish for.
While I have also criticized OceanGate in this forum, I'm not about to
throw out the innovation baby with the (ocean?) bathwater.
*Innovation* is *novelty* + *usefulness* + *cost-effectiveness* + *right-timing*.
Exhibit #1 is Apple, as they have continually reshaped the
computer world from personal computers to desktop publishing
to smartphones to digital cameras. At each step, the 'market
research' said that no one was interested in these devices; the
reason, of course, is that no one had had access to such devices,
so the intuition of the market research subjects was wrong.
There's a curious relationship between 'fake news' and
innovation: *the experts are always wrong*, because *experts*
(by definition) are *backwards-looking*. You can't be an expert
in a field that doesn't exist yet. Yet the 'expert' is the first to
call every new idea 'fake news'.
[continued in next message]