    RISKS-LIST: Risks-Forum Digest Sunday 16 Feb 2025 Volume 34 : Issue 56

    ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
    Peter G. Neumann, founder and still moderator

    ***** See last item for further information, disclaimers, caveats, etc. *****
    This issue is archived at <http://www.risks.org> as
    <http://catless.ncl.ac.uk/Risks/34.56>
    The current issue can also be found at
    <http://www.csl.sri.com/users/risko/risks.txt>

    Contents:
    UK Kicks Apple's Door Open for China (WSJ)
    Trump firings cause chaos at agency responsible for America's nuclear
    weapons (NPR)
    Lies, Damned Lies and Trumpflation (Paul Krugman)
    Government Tech Workers Forced to Defend Projects to Random Elon Musk Bros
    (WiReD)
    The Government's Computing Experts Say They're Terrified (The Atlantic)
    AI chatbots unable to accurately summarise news (BBC)
    AI can now replicate itself -- a milestone that has experts terrified
    (Space)
    Ex-Google boss fears AI could be used by terrorists (BBC)
    Dear, did you say pastry? meet the AI granny driving scammers up the wall
    (The Guardian)
    DeepSeek redefines who'll control AI (David Wamsley, Susmit Jha)
    Canadian residents are racing to save the data in Trump's crosshairs (CBC)
    Hiding the Fatal Motor Vehicle Crash Record (data-science)
    Government Accountability Office report on IT challenges (PGN)
    No squirrels? Monkeys will do! (BBC)
    ChatGPT may not be as power-hungry as once assumed (techcrunch)
    Hollywood writers say AI is ripping off their work. They want studios
    to sue (Steve Bacher)
    Re: UK slaps Technical Capacity Notice on Apple requiring Law
    Enforcement access to encrypted cloud data (Julian Bradfield)
    Abridged info on RISKS (comp.risks)

    ----------------------------------------------------------------------

    Date: Tue, 11 Feb 2025 11:22:05 PST
    From: Peter Neumann <neumann@csl.sri.com>
    Subject: UK Kicks Apple's Door Open for China (WSJ)

    [Here's a very nice follow-up to
    UK slaps Technical Capacity Notice on Apple requiring Law
    Enforcement access to encrypted cloud data (WashPost)
    RISKS-34.55. PGN]

    Matt Green and Alex Stamos, WSJ, UK Kicks Apple's Door Open for China,
    https://www.wsj.com/opinion/u-k-kicks-apples-door-open-for-china-encryption-data-protection-deb4bc2b

    The UK has ordered Apple <https://www.wsj.com/market-data/quotes/AAPL> to
    build a backdoor that would allow the British government to download and
    read the private encrypted data of any iPhone user anywhere in the world.
    This would be a massive downgrade in the security features that protect
    the privacy of billions of people and that made Apple one of the world's
    most valuable companies.

    Congress must immediately enact a law prohibiting American tech companies
    from providing encryption backdoors to any country. This would create a
    *conflict of laws* situation, allowing Apple to fight this order in UK
    courts and protect Americans' safety and security. The UK government's
    demand comes at a peak of global cyber conflict. Hackers from Russia
    continue to run roughshod over businesses, demanding millions of dollars
    in ransom to return access to computers and data. The Chinese Ministry of
    State Security successfully hacked most major U.S. telecom providers and
    the U.S. Treasury. They even targeted Mr. Trump and the Kamala Harris
    <https://www.wsj.com/topics/person/kamala-harris> campaign. Following
    these attacks on our national security, the Federal Bureau of
    Investigation reversed its hostility toward end-to-end encryption and
    recommended that Americans use encrypted message applications to protect
    themselves against foreign adversaries.

    The UK law, colloquially known as the *snooper's charter*, grants the
    British government unprecedented power to compel tech companies to
    weaken the security of the devices Americans use every day. Other
    countries have attempted to regulate encryption in ways that would
    compromise the security of users worldwide, but the major U.S. tech
    companies have refused to build features for either democratic or
    autocratic governments that would make encryption worthless to
    consumers.

    This order from the UK threatens to blow a hole in that stance, and
    not only for Apple. The strength of end-to-end encryption comes from
    the idea that security is based on math, not politics. Apple designed
    iCloud with an *advanced data protection* mode that makes data
    impossible for anyone but the user to retrieve. Google does the same
    for Android backups, while WhatsApp, Signal and Apple Messages provide
    similar security for chats. Yet once one country demands an exception
    to encryption, the decision about who can access data becomes
    political. To Apple, China is much more important than the UK; it's a
    much larger market and the place where most Apple devices are
    manufactured. If the British crack the encryption door an inch, the
    Chinese will kick it open.

    ------------------------------

    Date: Sat, 15 Feb 2025 01:55:14 -0500
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Trump firings cause chaos at agency responsible for America's
    nuclear weapons (NPR)

    Scenes of confusion and chaos unfolded over the last two days at the
    civilian agency that oversees the nation's nuclear weapons stockpile, as
    the Trump administration's mass firings were carried out before being
    "paused" on Friday. [...]

    Officials were given hours to fire hundreds of employees, and workers were
    shut out of email as termination notices arrived. The terminations were part
    of a broader group of dismissals at the Department of Energy, where
    reportedly more than a thousand federal workers were terminated. It was all
    a result of Elon Musk's Department of Government Efficiency (DOGE)
    initiative to slash the federal workforce and what Musk and President
    Trump characterize as excessive government spending.

    The NNSA is a semi-autonomous agency within the Department of Energy that
    oversees the U.S. stockpile of thousands of nuclear weapons. Despite
    having the words "National" and "Security" in its title, it was not
    getting an exemption for national security, managers at the agency were
    told last Friday, according to an employee at NNSA who asked not to be
    named, fearing retribution from the Trump administration. Just days
    before, officials in leadership had scrambled to write descriptions for
    the roughly 300 probationary employees at the agency who had joined the
    federal workforce less than two years ago.

    Managers were given just 200 characters to explain why the jobs these
    workers did mattered. [...]

    On Friday, an employee still at NNSA told NPR that the firings are now
    "paused," in part because of the chaotic way in which they unfolded.
    Another employee had been contacted and told that their termination had
    been "rescinded." But some worried the damage had already been done.
    Nuclear security is highly specialized, high-pressure work, but it's not
    particularly well paid, one employee told NPR. Given what's unfolded over
    the past 24 hours, "why would anybody want to take these jobs?" they
    asked.

    https://www.npr.org/2025/02/14/nx-s1-5298190/nuclear-agency-trump-firings-nnsa

    ------------------------------

    Date: Fri, 14 Feb 2025 17:23:18 -0500
    From: Gabe Goldberg <gabe@gabegold.com>
    Subject: Lies, Damned Lies and Trumpflation (Paul Krugman)

    Paul Krugman, *The New York Times*

    More DOGE hijinks: In yesterday's post [13 Feb 2025] I noted that the
    whole condoms-for-Hamas thing came from DOGE staffers who confused Gaza
    province in Mozambique with the Gaza Strip. Well, as one commenter
    pointed out, the thing about 150-year-old Social Security beneficiaries
    may be another comical error. Apparently in COBOL -- obsolete in the
    business world but still used in government -- a missing date of birth
    is registered as 1875. Commenters on X and Threads say the same. So the
    only "fraud" here is the pretense that Musk's child programmers have any
    idea what they're doing.

    https://paulkrugman.substack.com/p/lies-damned-lies-and-trumpflation

    The risk? A Pulitzer Prize winning economist writing about technology.

    [What's wrong with that? He has lots of fact-checkers, and he is often
    right on the button. Fortunately, he seems to have learned a lot along
    the way to his Pulitzer. PGN]
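
    The alleged mechanism is easy to illustrate: if a system stores a fixed
    placeholder year when a date of birth is unknown, every such record
    computes to an absurd age. A minimal sketch -- the sentinel value and
    the handling here are hypothetical, since whether Social Security
    systems actually behave this way is exactly what is being debated:

```python
from datetime import date

# Hypothetical placeholder for "date of birth unknown" in a legacy record.
SENTINEL_DOB = date(1875, 1, 1)

def age_on(dob: date, today: date) -> int:
    """Whole years elapsed between dob and today."""
    years = today.year - dob.year
    if (today.month, today.day) < (dob.month, dob.day):
        years -= 1
    return years

# A record whose missing birth date was stored as the sentinel looks
# roughly 150 years old when queried in 2025:
print(age_on(SENTINEL_DOB, date(2025, 2, 16)))  # → 150
```

    Any fixed sentinel date produces the same artifact: a cluster of
    implausibly old "beneficiaries" at exactly one age.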

    ------------------------------

    Date: Wed, 12 Feb 2025 16:00:25 -0800
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: Government Tech Workers Forced to Defend Projects to Random Elon
    Musk Bros (WiReD)

    More info on the guy Musk brought in to interview them. Not much about
    him was known or said in the article, but he turns out to be even more
    of a piece of work than they thought.

    ------------------------------

    Date: Sun, 9 Feb 2025 08:06:33 -0500
    From: Jan Wolitzky <jan.wolitzky@gmail.com>
    Subject: The Government's Computing Experts Say They're Terrified
    (The Atlantic)

    Elon Musk's unceasing attempts to access the data and information systems
    of the federal government range so widely, and are so unprecedented and
    unpredictable, that government computing experts believe the effort has
    spun out of control. This week, we spoke with four federal-government IT
    professionals -- all experienced contractors and civil servants who have
    built, modified, or maintained the kind of technological infrastructure
    that Musk's inexperienced employees at his newly created Department of
    Government Efficiency are attempting to access. In our conversations,
    each expert was unequivocal: They are terrified and struggling to
    articulate the scale of the crisis.

    https://www.theatlantic.com/technology/archive/2025/02/elon-musk-doge-security/681600/

    ------------------------------

    Date: Tue, 11 Feb 2025 06:45:19 -0700
    From: Matthew Kruk <mkrukg@gmail.com>
    Subject: AI chatbots unable to accurately summarise news (BBC)

    https://www.bbc.com/news/articles/c0m17d8827ko

    Four major artificial intelligence (AI) chatbots are inaccurately
    summarising news stories, according to research carried out by the BBC.

    The BBC gave OpenAI's ChatGPT, Microsoft's Copilot, Google's Gemini and
    Perplexity AI content from the BBC website then asked them questions
    about the news.

    It said the resulting answers contained "significant inaccuracies" and
    distortions.

    In a blog, Deborah Turness, the CEO of BBC News and Current Affairs, said
    AI brought "endless opportunities" but the companies developing the tools
    were "playing with fire".

    ------------------------------

    Date: Thu, 13 Feb 2025 08:12:01 -0800
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: AI can now replicate itself -- a milestone that has experts terrified
    (Space)

    Scientists say artificial intelligence (AI) has crossed a critical "red
    line" and has replicated itself. In a new study, researchers from China
    showed that two popular large language models (LLMs) could clone themselves.

    https://www.space.com/space-exploration/tech/ai-can-now-replicate-itself-a-milestone-that-has-experts-terrified

    [Same old issue: If it is replicating itself, it is replicating all its
    mistakes. That's not an improvement. PGN]

    ------------------------------

    Date: Wed, 12 Feb 2025 20:05:11 -0700
    From: Matthew Kruk <mkrukg@gmail.com>
    Subject: Ex-Google boss fears AI could be used by terrorists (BBC)

    https://www.bbc.com/news/articles/c5y6eq2zxlno

    The former chief executive of Google is worried artificial intelligence
    could be used by terrorists or "rogue states" to "harm innocent people."

    Eric Schmidt told the BBC: "The real fears that I have are not the ones
    that most people talk about AI -- I talk about extreme risk."

    The tech billionaire, who held senior posts at Google from 2001 to 2017,
    told the Today programme "North Korea, or Iran, or even Russia" could adopt
    and misuse the technology to create biological weapons.

    He called for government oversight on private tech companies which are
    developing AI models, but warned over-regulation could stifle
    innovation.

    ------------------------------

    Date: Mon, 10 Feb 2025 06:03:22 -0500
    From: Jan Wolitzky <jan.wolitzky@gmail.com>
    Subject: Dear, did you say pastry? meet the AI granny driving scammers
    up the wall (The Guardian)

    An elderly grandmother who chats about knitting patterns, recipes for
    scones and the blackness of the night sky to anyone who will listen has
    become an unlikely tool in combatting scammers.

    Like many people, Daisy is beset with countless calls from fraudsters,
    who often try to take control of her computer after claiming she has
    been hacked.

    But because of her dithering and inquiries about whether they like cups of
    tea, the criminals end up furious and frustrated rather than successful.

    Daisy is, of course, not a real grandmother but an AI bot created by
    computer scientists to combat fraud. Her task is simply to waste the
    time of the people who are trying to scam her.

    https://www.theguardian.com/money/2025/feb/04/ai-granny-scammers-phone-fraud

    ------------------------------

    Date: Wed, 12 Feb 2025 11:38:20 PST
    From: Peter Neumann <neumann@csl.sri.com>
    Subject: DeepSeek redefines who'll control AI

    David Wamsley, *San Francisco Chronicle*, 12 Feb 2025

    DeepSeek R1 delivers ChatGPT-4-level performance at a fraction of the
    cost, erasing a key assumption of AI progress.

    Excerpts

    For years the prevailing wisdom in Washington and Silicon Valley
    rested on an article of faith: that the United States held a
    commanding lead in AI. The narrative was compelling -- truly advanced
    AI would remain the exclusive domain of well-funded American companies
    with their proprietary data and vast computing resources.

    DeepSeek didn't just challenge that narrative -- it shattered it.

    We've crossed a threshold. The old playbooks -- whether for business
    strategy, national policy, or career planning -- are obsolete
    overnight. The future we've been preparing for isn't coming next
    decade, next year, or even next quarter.

    It's already here.

    [But does it resolve all of the integrity and privacy issues? Also,
    does it enhance Evidence-based Research? PGN]

    ------------------------------

    Date: Wed, 12 Feb 2025 20:37:05 +0000
    From: Susmit Jha <susmit.jha@sri.com>
    Subject: DeepSeek redefines who'll control AI

    The claim that DeepSeek-R1 was trained for $6M is propaganda -- the
    consensus in the AI community is that DeepSeek spent billions to do
    this. If you look around, several blogs do the cost breakdown. From
    their own paper, they had 2048 H800s, where each H800 has cost between
    $22K and $35K. For a typical datacenter training ML models, these GPUs
    require costly internode interconnects, NVMe high-speed disks, etc. The
    capital cost of their own declared infrastructure is in the same
    ballpark. Running it requires further cost.
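
    A back-of-the-envelope check of the hardware figures quoted above (the
    per-GPU prices are the range given in this post, not independently
    verified; interconnects, storage, power, and salaries are all extra):

```python
gpus = 2048                              # H800 count cited from the DeepSeek paper
price_low, price_high = 22_000, 35_000   # per-GPU cost range claimed above (USD)

capex_low = gpus * price_low
capex_high = gpus * price_high
print(f"GPU capital cost alone: ${capex_low:,} to ${capex_high:,}")
# → GPU capital cost alone: $45,056,000 to $71,680,000
```

    Even the low end of the GPU spend alone is several times the widely
    repeated ~$6M training figure.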

    The salary offered by DeepSeek to its engineers in China was $1.3
    million. So, you can estimate the human-resources cost.

    In contrast, we offer around $150K (roughly 10x lower) to our starting
    folks. We do not have even one GPU comparable to their 2048 H800s.

    Their claim of obtaining reasoning from an outcome-only reward model
    without SFT -- which conflicts with several papers that require PRMs --
    appears to be a consequence of either the natural proliferation of
    chain-of-thought responses from SOTA LLMs across the web, or an
    engineered implicit distillation of CoT from OpenAI and Anthropic.

    Their claim to be open-source is even more ill-founded -- if someone
    can find their training code and data, please share. They shared only
    weights and inference code. I can't imagine why someone would call that
    open-source rather than open-weight, as Meta and others call their
    models.

    See this for a good analysis: https://semianalysis.com/2025/01/31/deepseek-debates/

    We should kill this propaganda and not let it spread -- this could be a
    misinformation campaign from a near-peer adversary to make us think
    that investments in AI are not needed, or that the U.S. is more
    wasteful with its AI investments -- both of which are flatly wrong.

    ------------------------------

    Date: Thu, 13 Feb 2025 06:51:15 -0700
    From: Matthew Kruk <mkrukg@gmail.com>
    Subject: Canadian residents are racing to save the data in Trump's crosshairs
    (CBC)

    https://www.cbc.ca/news/politics/canada-us-medical-environmental-data-1.7457627

    The call to Angela Rasmussen came out of the blue and posed a troubling
    question. Had she heard the rumour that key data sets would be removed
    from the U.S. Centers for Disease Control and Prevention's website the
    next day?

    It's something Rasmussen had thought could never happen.

    "It had never really been thought of before that CDC would actually start deleting some of these crucial public health data sets," said the
    University of Saskatchewan virologist. "These data are really, really
    important for everybody's health -- not just in the U.S. but around
    the world."

    The following day, Jan. 31, Rasmussen started to see data disappear. She
    knew she needed to take action.

    ------------------------------

    Date: Mon, 10 Feb 2025 14:53:26 -0500 (EST)
    From: "R. A. Whitfield" <inquiry@data-science.llc>
    Subject: Hiding the Fatal Motor Vehicle Crash Record

    Fatality Analysis Reporting System Vandalized by Officials at NHTSA
    R.A. Whitfield, Manager, Forensic Data Science LLC

    The most recently available Fatality Analysis Reporting System (FARS)
    data for the 2022 calendar year were removed from the National Highway
    Traffic Safety Administration's (NHTSA's) File Downloads during the
    first week of February 2025. The official explanation for this action
    was that the data did not conform to President Trump's "Executive Order
    Defending Women from Gender Ideology Extremism and Restoring Biological
    Truth to the Federal Government."

    According to a communication from NHTSA, these data and their associated documentation "... will be reposted once it is following this Executive
    Order [sic]."

    It is absurd to suggest that women's lives are being "defended" by
    hiding detailed information in FARS about more than eleven thousand of
    them who were killed in motor vehicle crashes in the United States in
    2022. If any good can follow from such an enormous loss of life, it
    will come about by studying the data to learn how similar casualties
    can be prevented in the future. That can't be done if the data have
    been suppressed.

    The connection of the FARS data with "Gender Ideology" is remote.
    Precisely one person was killed in 2022 whose "SEXNAME" was coded in
    the FARS data as "Other" -- instead of "Male" or "Female" -- according
    to a copy of the data that was fortunately saved from the bonfire.
    Twenty-one "Other" persons were involved in fatal crashes but were not
    themselves killed.

    It is cold comfort that the data for the remaining 95,735 persons in
    fatal crashes in 2022 might someday be reposted by NHTSA once the
    faithfulness to the "biological truth" of these twenty-two persons' sex
    has been determined and restored.

    Efforts to hide the motor vehicle crash record are anti-scientific and
    ought to concern manufacturers and consumers alike. In particular,
    concealing the most current FARS data will impede progress toward
    achieving whatever safety benefits advanced driver-assistance systems
    might bring. Access to data about fatal motor vehicle crashes is a
    crucial tool that can be used by researchers to shed light on the
    safety risks of "self-driving" technology.

    Who could be opposed to this?

    http://data-science.llc/fars2022.html

    ------------------------------

    Date: Tue, 11 Feb 2025 13:09:38 PST
    From: Peter Neumann <neumann@csl.sri.com>
    Subject: Government Accountability Office report on IT challenges

    [Once upon a time (for nearly twenty years), I was on the GAO Executive
    Council on Information Management and Technology, with Gene Spafford
    joining a little later than I did. The GAO did an enormous job in writing
    incisive reports on critical government agencies and Congress, while
    trying to be objective -- although they sometimes had to cater a little
    to the party that asked for the research. Amazingly, it is still
    active. This is a voice that needs to survive and be heard today. PGN]

    GAO Calls for Urgent Action to Address IT Acquisition and Management
    Challenges, GAO, 23 Jan 2025

    The U.S. Government Accountability Office (GAO) today issued a report
    updating its IT acquisitions and operations high-risk area. In this update,
    GAO identified major challenges to federal IT acquisitions and management,
    as well as critical actions the government needs to take to implement
    effective and cost-efficient mission-critical IT systems and operations.

    ------------------------------

    Date: Sun, 9 Feb 2025 09:15:29 -0800
    From: "Jim" <jgeissman@socal.rr.com>
    Subject: No squirrels? Monkeys will do! (BBC)

    Power is being gradually restored across Sri Lanka after a nationwide
    outage left buildings, including hospitals, relying on generators.

    Officials say it may take a few hours to get power back across the
    island nation, but medical facilities and water purification plants
    have been given priority.

    Energy Minister Kumara Jayakody reportedly blamed a monkey for causing
    the power cut, saying the animal came into "contact with our grid
    transformer causing an imbalance in the system", according to the AFP
    news agency.

    The Ceylon Electricity Board (CEB) said the power cut had been caused
    by an emergency at a sub-station, south of Colombo, and gave no further
    details.

    "Engineers are attending to it to try and restore the service as soon
    as possible," the minister said.

    The CEB said "we are making every effort to restore the island-wide
    power failure as soon as possible".

    Hospitals and businesses across the island nation of 22 million people
    have been using generators or inverters.

    ------------------------------

    Date: Sat, 15 Feb 2025 07:51:26 -0800
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: ChatGPT may not be as power-hungry as once assumed

    ChatGPT, OpenAI's chatbot platform, may not be as power-hungry as once
    assumed. But its appetite largely depends on how ChatGPT is being used
    and the AI models that are answering the queries, according to a new
    study.

    A recent analysis by Epoch AI, a nonprofit AI research institute,
    attempted to calculate how much energy a typical ChatGPT query
    consumes. A commonly cited stat is that ChatGPT requires around 3
    watt-hours of power to answer a single question, or 10 times as much as
    a Google search.

    Epoch believes that's an overestimate.

    Using OpenAI's latest default model for ChatGPT, GPT-4o, as a
    reference, Epoch found the average ChatGPT query consumes around 0.3
    watt-hours -- less than many household appliances.
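
    The revision is simple arithmetic; a quick sketch using the two figures
    above (the daily query count is an invented example, not from the
    article):

```python
old_estimate_wh = 3.0    # commonly cited per-query figure (watt-hours)
new_estimate_wh = 0.3    # Epoch AI's GPT-4o-based estimate (watt-hours)

# The new figure is a 10x downward revision of the old one.
print(round(old_estimate_wh / new_estimate_wh))  # → 10

# Hypothetical heavy use: 20 queries a day for a year, in kilowatt-hours.
queries_per_day = 20
annual_kwh = new_estimate_wh * queries_per_day * 365 / 1000
print(round(annual_kwh, 2))  # → 2.19
```

    At the revised estimate, even heavy daily use adds up to a few
    kilowatt-hours a year -- consistent with the "not a big deal compared
    to normal appliances" framing below.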

    "The energy use is really not a big deal compared to using normal
    appliances or heating or cooling your home, or driving a car," Joshua
    You, the data analyst at Epoch who conducted the analysis, told
    TechCrunch.

    AI's energy usage -- and its environmental impact, broadly speaking --
    is the subject of contentious debate as AI companies look to rapidly
    expand their infrastructure footprints. Just last week, a group of over
    100 organizations published an open letter calling on the AI industry
    and regulators to ensure that new AI data centers don't deplete natural
    resources and force utilities to rely on nonrenewable sources of
    energy.

    You told TechCrunch his analysis was spurred by what he characterized
    as outdated previous research. You pointed out, for example, that the
    author of the report that arrived at the 3 watt-hours estimate assumed
    OpenAI used older, less-efficient chips to run its models. [...]

    https://techcrunch.com/2025/02/11/chatgpt-may-not-be-as-power-hungry-as-once-assumed/

    (Reading this article may be confusing, as it seems to attribute a
    number of statements to you, Dear Reader. Shades of "Who's on First?")

    ------------------------------

    Date: Thu, 13 Feb 2025 08:10:45 -0800
    From: Steve Bacher <sebmb1@verizon.net>
    Subject: Hollywood writers say AI is ripping off their work. They want
    studios to sue (LA Times)

    Several film and TV writers say they are horrified their scripts are being
    used by tech companies to train AI models without writers' permission. They
    are pressuring studios to take legal action.

    https://www.latimes.com/entertainment-arts/business/story/2025-02-12/hollywood-writers-say-ai-is-ripping-off-their-work-they-want-studios-to-sue

    ------------------------------

    Date: Sun, 9 Feb 2025 10:27:30 +0000
    From: Julian Bradfield <jcb@inf.ed.ac.uk>
    Subject: Re: UK slaps Technical Capacity Notice on Apple requiring Law
    Enforcement access to encrypted cloud data (RISKS-34.55)

    All reports, like this one, conflate two things. A technical capability
    notice does indeed require Apple to backdoor their security. However,
    it does not require them to allow the UK authorities to "retrieve all
    the content any Apple user worldwide has uploaded to the cloud". Each
    individual use of the backdoor is still subject to warrant. In short,
    the TCN requires Apple to make it possible for Apple to respond to a
    warrant.

    That's quite bad enough -- it doesn't help to exaggerate things by
    suggesting a massive free-for-all is proposed.

    ------------------------------

    Date: Sat, 28 Oct 2023 11:11:11 -0800
    From: RISKS-request@csl.sri.com
    Subject: Abridged info on RISKS (comp.risks)

    The ACM RISKS Forum is a MODERATED digest. Its Usenet manifestation is
    comp.risks, the feed for which is donated by panix.com as of June 2011.
    SUBSCRIPTIONS: The mailman Web interface can be used directly to
    subscribe and unsubscribe:
    http://mls.csl.sri.com/mailman/listinfo/risks

    SUBMISSIONS: to risks@CSL.sri.com with meaningful SUBJECT: line that
    includes the string `notsp'. Otherwise your message may not be read.
    *** This attention-string has never changed, but might if spammers use it.
    SPAM challenge-responses will not be honored. Instead, use an alternative
    address from which you never send mail where the address becomes public!
    The complete INFO file (submissions, default disclaimers, archive sites,
    copyright policy, etc.) has moved to the ftp.sri.com site:
    <risksinfo.html>.
    *** Contributors are assumed to have read the full info file for guidelines!

    OFFICIAL ARCHIVES: http://www.risks.org takes you to Lindsay Marshall's
    delightfully searchable html archive at newcastle:
    http://catless.ncl.ac.uk/Risks/VL.IS --> VoLume, ISsue.
    Also, ftp://ftp.sri.com/risks for the current volume/previous directories
    or ftp://ftp.sri.com/VL/risks-VL.IS for previous VoLume
    If none of those work for you, the most recent issue is always at
    http://www.csl.sri.com/users/risko/risks.txt, and index at /risks-34.00
    ALTERNATIVE ARCHIVES: http://seclists.org/risks/ (only since mid-2001)
    *** NOTE: If a cited URL fails, we do not try to update them. Try
    browsing on the keywords in the subject line or cited article leads.
    Apologies for what Office365 and SafeLinks may have done to URLs.
    Special Offer to Join ACM for readers of the ACM RISKS Forum:
    <http://www.acm.org/joinacm1>

    ------------------------------

    End of RISKS-FORUM Digest 34.56
    ************************
