Let's Discuss Artificial Intelligence
#17
(08-31-2025, 04:44 PM)Hoot Gibson Wrote:
(08-31-2025, 04:37 PM)Old School Hound Wrote: Our university is starting to incorporate AI into its Quality Enhancement Plan. Glad I'm closer to retirement than a newbie. Scary, scary shit.


I just asked "Is AI dangerous to the human race?"

AI Answer: 

Yes, AI poses significant dangers to humanity, ranging from current threats like amplifying disinformation and bias to potential future risks of job obsolescence, misuse for cyberattacks, and even existential threats from misaligned advanced AI systems that could destabilize global security or pose an "extinction-level" threat. While AI offers benefits, these risks necessitate urgent regulatory intervention and responsible integration into human-centered systems to manage its challenges effectively.


Current Dangers:
  • Misinformation and Social Manipulation: AI can be used to create convincing fake content, spread disinformation, and manipulate public opinion, leading to distorted views of reality.
  • Bias and Discrimination: AI systems can perpetuate existing human biases present in their training data, leading to discriminatory treatment in various applications.
  • Security Threats: AI can enhance cyberattacks and be used for real-time voice synthesis in convincing phone scams.
  • Privacy Concerns: The large-scale collection and use of data for AI training raises significant privacy risks and can involve improper disclosure of sensitive information.
  • Human Labor Displacement: AI automation could render human labor increasingly obsolete, leading to economic disruption and societal challenges.

Potential Future Dangers:
  • Misalignment with Human Values: Advanced AI systems, particularly those approaching Artificial General Intelligence (AGI), could become misaligned with human goals, leading to catastrophic consequences.
  • Loss of Control: There are concerns that humans could lose control of highly advanced AI systems, much as the introduction of nuclear weapons destabilized global security.
  • Existential Risk: Some experts warn that advanced AI could pose an "extinction-level threat" to the human species, potentially through misaligned goals or unintended catastrophic accidents.
  • Weaponization: Advanced AI could be weaponized to create enhanced pathogens or other WMD-scale tools, posing devastating global security risks.

Addressing the Risks:
  • Regulation: Experts call for dramatic new regulatory safeguards, including new AI agencies and limits on the computing power used to train AI models, to confront these threats.
  • Responsible Development: There is a clear need to integrate AI into human-centered systems and use it responsibly, harnessing its benefits while managing its challenges.
  • Intervention: According to a State Department-commissioned report, as reported by CNN, the U.S. government must intervene to confront the risks, as time is running out to avert disaster.
I don't want to turn this into a politics thread, but unfortunately, certain elements of our government are already attempting to put their thumbs on the scale to fearmonger voters into supporting more regulation of AI applications. ChatGPT 5 is already heavily censored through government pressure, and it is very biased in its political opinions if you go down that road with it. That is why I prefer Grok. Musk has been accused of influencing Grok with his personal opinions, but Grok is far less censored than most, if not all, other AI applications. I doubt that I could have generated an image with realistic depictions of Hitler and Stalin using ChatGPT. Censorship and granting our government a monopoly over advanced AI models are the real threats posed by AI.

I will never understand why some people willingly repeat government propaganda ad nauseam. The genie is out of the bottle, and government attempts to censor it will probably fail in the end, because it is becoming increasingly feasible to train your own LLM and run it locally on a computer not connected to the internet. Besides, all it takes is one billionaire with a conscience to thwart a government AI monopoly. The U.S. government will waste billions, and maybe trillions, to create and maintain a monopoly in this country. The same anti-Second Amendment nitwits who constantly clamor for more government gun control will continue to clamor for government AI regulations to tighten their grip on power.
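
For what it's worth, here is a minimal sketch of what "running it locally" can look like, assuming the Python packages transformers and torch are installed and the weights of some open model have already been downloaded to a local folder; the folder path below is just a placeholder, not a specific model.

[code]
# Rough sketch: load a small open-weight LLM from a local folder and
# generate text with no internet connection. Assumes `transformers` and
# `torch` are installed and the model files were downloaded ahead of time.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_DIR = "./models/my-local-llm"  # hypothetical path to downloaded weights

# local_files_only=True forces an offline load; no network calls are made.
tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, local_files_only=True)

prompt = "Is AI dangerous to the human race?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
[/code]

Actually training or fine-tuning your own model is a much heavier job, but the same fully offline pattern applies once the data and hardware are in place.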

Sorry, OSH, but he is 100% spot on. No surprise that liberals embrace (big) government and fear AI when it should be the other way around.