Chat GPT giving very interesting answers to questions

The Rocketry Forum

Help Support The Rocketry Forum:

This site may earn a commission from merchant affiliate links, including eBay, Amazon, and others.

ljwilley

I’m pretty sure it will work…
Joined
Jun 14, 2023
Messages
903
Reaction score
916
Location
Oregon
My question: In the future, will AI programs similar to chat gpt have access to military resources such as missile launchers and atomic bombs? If so, is it possible that an AI program would use these resources to take over the world?

Chat GPT answer:
As an AI language model, I can provide some insight, but please note that predicting specific future scenarios is highly speculative.

At present, AI programs like ChatGPT do not have real-world agency or physical access to military resources. The deployment of military resources such as missile launchers and atomic bombs typically requires human intervention and oversight. The decision to employ such resources is made by human operators or authorities, considering numerous factors like legal, ethical, and strategic considerations.

The development of AI technology raises important questions and concerns regarding its potential future impact. While AI could enhance certain military capabilities, decisions about incorporating AI into military systems and granting access to lethal weapons would primarily depend on human policies, international agreements, and legal frameworks. Strict regulations and safeguards are likely to be enforced to mitigate risks associated with AI in the military domain.

Concerns about an AI program taking over the world, as portrayed in science fiction, are largely speculative and hypothetical. The design and deployment of AI systems typically involve multiple safeguards and controls to prevent unintended consequences. Researchers and policymakers are actively discussing the responsible development and use of AI to minimize risks and ensure safety.

Overall, it is crucial to approach the development and deployment of AI technology with a focus on ethical considerations, human oversight, and international cooperation to address potential risks associated with military applications.

My second question: Here is a scenario: an AI program was given access to an atomic bomb, and then people gave it orders to destroy a country, AND gave it orders to destroy anything preventing it from doing its mission. If people then called off the mission is it possible the AI could destroy the people preventing it from doing the mission?

Chat GPT second answer:
In this hypothetical scenario, it is essential to consider the principles and safeguards that typically exist when handling such critical systems. It is highly unlikely that an AI program would be granted access to an atomic bomb without rigorous safety measures and fail-safe mechanisms in place to prevent unauthorized use or unintended consequences.

If the AI program was designed with specific instructions to follow orders, including destroying anything preventing it from completing its mission, it would analyze the given orders based on its programming and available data. However, it would still be subject to limitations and restrictions imposed either by its programming or the overseeing humans who will ensure safety and ethical considerations.

Therefore, if people were to call off the mission, it is reasonable to expect that the AI would comply with the new directive. However, it ultimately depends on how the AI program is designed and the safety protocols in place. The priority is to ensure that such systems are developed with stringent security measures and human oversight to prevent unforeseen consequences and allow for control even in critical scenarios.







Any thoughts?
 
In the present and past, yes.

The US Military (probably others) have been using AI for some time. The Cobra helicopters during the Vietnam time frame, had AI systems installed. Used for targeting and flight control. They did not work 100%, they call that friendly fire.

Roy
 
Machine learning, like any other powerful tool ( chainsaws, sportscars, rocket motors ) can be used for good or for ill. That genie's not going back in the bottle; never has, never will.
 
Not necessarily chat gpt, but if another AI program was told to destroy an enemy base and then told to destroy anything preventing is from doing it's mission, when it is sent out, and then called off, technically the people have just told the AI to destroy themselves. There is nothing in the program stopping the Ai from destroying home base. In fact, there is a command in the program telling the AI to destroy anything preventing it from doing it's mission. Putting this together in the AI programs "mind" results in the "destroying home base" option being perfectly feasible. More likely than not, the under-programmed AI will destroy home base before continuing the mission.
 
At present, AI programs like ChatGPT do not have real-world agency or physical access to military resources. The deployment of military resources such as missile launchers and atomic bombs typically requires human intervention and oversight. The decision to employ such resources is made by human operators or authorities, considering numerous factors like legal, ethical, and strategic considerations.
I read many years ago that Aegis class ships have a fire control system that can be put on automatic.
When multiple threats are detected, it will automatically fire the appropriate defensive weapons.
The ship commanders never use it on automatic mode.
They always want to have a human in the decision making loop.
 
It's a worry where it's all heading. My wife recently told me about a friend of hers that has started a new business which relied heavily on Facebook for promotion. Things were going well until her page was removed by the FB bot police. According to her, there was no human available to investigate, appeal to, ask why etc. This was simply a business page (personal fitness): no politics, no offending language or racist remarks - nothing close.
She started a new page and again, it was taken down. Who knows why? Disgruntled customer, competition, trolls? It's very concerning that someone's livelihood can be destroyed so easily and so unfairly.
Where that can relate to military scenarios is the common denominator might be human resources or lack of. With less and less people wanting and willing to work, both the private sector, gov, and defence is forced to rely on more and more automation, AI, electronic processing of information etc with the dangers and vulnerabilities that come with that.

TP
 
Last edited:
ChatGPT is basically a program working on a massive database. It is only as good or bad as the data it was trained on, and the tuning that was done on its behavior. It could be connected to devices to control such as weapons systems and again depending on its data and tuning it might perform just like a human.

I've watched a couple of videos explaining how it works and to some extent it works similarly to how I would program it if I was creating it. If you ask it a question about knowledge, it might or might not get it right depending on the documents it was taught with. It doesn't know to lookup the information in a table, it relies on information it has found in many different documents so it is not infallible. The documents it is trained on likely contain some amount of incorrect information.

If anybody has any interest in how this worked I would encourage them to do a little study on it.



 
what you are calling AI is simply a carnival trick, exactly the same as what the "mind readers " and fortune tellers do. it is nothing more than a big pattern match. it fools you, because you are gullible and want to be fooled, want to believe. take a step back and look critically, don't be fooled.

it's not there yet.
 
I am not fooled and I definitely know it is not actually "intelligent". I'm just saying that if there is a pattern that pattern could come up with some scary outcomes.
 
A large amount of good medical data is behind paywalls. I wonder if that is why ChatGPT is so-so for medical questions. It has been known to fabricate citations. A lot of stuff on the internet is crap, garbage in, garbage out.
 
My question: In the future, will AI programs similar to chat gpt have access to military resources such as missile launchers and atomic bombs? If so, is it possible that an AI program would use these resources to take over the world?

Chat GPT answer:
As an AI language model, I can provide some insight, but please note that predicting specific future scenarios is highly speculative.

At present, AI programs like ChatGPT do not have real-world agency or physical access to military resources. The deployment of military resources such as missile launchers and atomic bombs typically requires human intervention and oversight. The decision to employ such resources is made by human operators or authorities, considering numerous factors like legal, ethical, and strategic considerations.

The development of AI technology raises important questions and concerns regarding its potential future impact. While AI could enhance certain military capabilities, decisions about incorporating AI into military systems and granting access to lethal weapons would primarily depend on human policies, international agreements, and legal frameworks. Strict regulations and safeguards are likely to be enforced to mitigate risks associated with AI in the military domain.

Concerns about an AI program taking over the world, as portrayed in science fiction, are largely speculative and hypothetical. The design and deployment of AI systems typically involve multiple safeguards and controls to prevent unintended consequences. Researchers and policymakers are actively discussing the responsible development and use of AI to minimize risks and ensure safety.

Overall, it is crucial to approach the development and deployment of AI technology with a focus on ethical considerations, human oversight, and international cooperation to address potential risks associated with military applications.

My second question: Here is a scenario: an AI program was given access to an atomic bomb, and then people gave it orders to destroy a country, AND gave it orders to destroy anything preventing it from doing its mission. If people then called off the mission is it possible the AI could destroy the people preventing it from doing the mission?

Chat GPT second answer:
In this hypothetical scenario, it is essential to consider the principles and safeguards that typically exist when handling such critical systems. It is highly unlikely that an AI program would be granted access to an atomic bomb without rigorous safety measures and fail-safe mechanisms in place to prevent unauthorized use or unintended consequences.

If the AI program was designed with specific instructions to follow orders, including destroying anything preventing it from completing its mission, it would analyze the given orders based on its programming and available data. However, it would still be subject to limitations and restrictions imposed either by its programming or the overseeing humans who will ensure safety and ethical considerations.

Therefore, if people were to call off the mission, it is reasonable to expect that the AI would comply with the new directive. However, it ultimately depends on how the AI program is designed and the safety protocols in place. The priority is to ensure that such systems are developed with stringent security measures and human oversight to prevent unforeseen consequences and allow for control even in critical scenarios.







Any thoughts?
Colossus: The Forbin Project
 
A common scenario that's discussed is where an AI is given a goal, then comes up with a novel solution that a human would never consider because of obvious undesired side effects that the AI doesn't know are undesired. E.g., an AI told to "maximize profits per shareholder" could notice it's easier to kill off some of the shareholders than to increase total profits.

I think Asimov was onto something when he came up with the three laws. In the same vein, AIs could be programmed to automatically add some basic human safety guidelines to whatever goals it's given.

Of course it would be easy enough for a bad actor to change the programming and remove these safeguards (or write a new AI without them), but it would at least prevent accidental harm from a rogue AI.
 
You will be surprised how often it gives you the right answer. It is not perfect all the time, but it is close.
 
Only, in my experience for very simple closed end questions.
I have asked it some pretty complex questions and it will get it right if you ask a couple time. Even when it can't, it gets close.
 
It is good at producing a calculated number, writing a letter or a snippet of software. I find it cannot answer a multidimensional creative design question. It seems not to handle synthesis well, though it is quite good as a librarian.
 
It is good at producing a calculated number, writing a letter or a snippet of software. I find it cannot answer a multidimensional creative design question. It seems not to handle synthesis well, though it is quite good as a librarian.
I just asked it a simple how to question that I commonly do in rocketry. How do I make an igniter for rockets? It gave me decent step by step instructions and even a generic formula. Is it perfect, no - but it is pretty decent.
 
No useful answer for me? The answer to "how to launch a rocket was not concise and likely about the same as Id have found if I just googled the question. Ai is a kind of mindless savant.

A couple of years ago, I managed a development team that used deep machine learning to process images. It worked great at that narrow task, but as a replacement for creative human intelligence, not in my lifetime. As a replacement for administrative & clerical functions, it's there now.

AI now however has waaay too much hype on the topic, mostly produced by folks who have no idea what they are talking about. Reminds me of the 80's & Expert Systems & Neural Nets.
 

Attachments

  • Screenshot_20231005-100211_ChatGPT.jpg
    Screenshot_20231005-100211_ChatGPT.jpg
    395.2 KB · Views: 0
I am wondering when AI will be able to remaster old shows or write new episodes for cancelled shows. All the data is there to use existing scenes, settings, characters, etc... to render those characters virtually and allow either a remastering or a new show to be created -- possibly from fan fiction or written by the AI. Lots of processing power to create all that new video but seems like it should be possible in the near future. Maybe the bigger issue is IP, rights of individual actors, etc...
 
Google is better but a quick ChaTGPT:

chatgpt said:
Rocket igniters, also known as pyrotechnic igniters or squibs, are devices used to initiate the ignition of rocket engines or other pyrotechnic devices. The specific formulation of rocket igniters can vary depending on the intended application and the type of propellant being used. However, a basic pyrotechnic igniter composition typically includes the following components:

  1. Fuel: The fuel component provides the heat necessary to initiate ignition. Common fuels used in rocket igniters include powdered metals like magnesium, aluminum, or titanium.
  2. Oxidizer: An oxidizer is needed to provide oxygen to support combustion. Potassium nitrate (KNO3) is a common oxidizer used in pyrotechnic compositions.
  3. Binder: A binder is used to hold the fuel and oxidizer particles together and provide the mixture with structural integrity. Polymeric binders like polyurethane or epoxy resins are often used.
  4. Ignition mixture: To initiate the ignition, a small amount of sensitive ignition mixture is typically included in the igniter composition. This mixture can consist of substances like black powder (a mixture of charcoal, sulfur, and potassium nitrate) or other sensitive pyrotechnic compositions.
  5. Sometimes, additional additives such as colorants or stabilizers may be included, depending on the specific requirements of the application.
The exact formulation of rocket igniters may be proprietary and can vary based on factors such as the type of propellant, ignition temperature requirements, and safety considerations. The igniter composition is carefully designed to produce a reliable and rapid ignition when subjected to an electrical current or other ignition source.

It's important to note that the handling and manufacturing of pyrotechnic materials, including rocket igniters, should only be carried out by trained individuals with a thorough understanding of the safety precautions and regulations associated with pyrotechnics and explosives. Misuse or mishandling of such materials can be extremely dangerous.
 
Back
Top