ChatGPT and the CTI O-8000

The Rocketry Forum

FlyAwaySam

So I was playing around with ChatGPT, curious how fast you could get a Toyota Corolla going using the CTI O-8000. ChatGPT starts doing its thing, and it goes downhill from here:

[Attached screenshot of ChatGPT's impulse answer]

That impulse did not look right, so then I asked about its thrust data:

[Attached screenshot of ChatGPT's thrust figures]

At this point I started to realize that, for some reason, it was treating US units as SI and botching the equations (which is how it concluded that a 1,500 kg object would only reach a mere 6 m/s). So I asked it about the units and got this response:
[Attached screenshot of ChatGPT's reply about the units]


After this, I asked the same question with the Aerotech N1000, and it said the N1000's total impulse is roughly 3/4 of the CTI O-8000's, which it is not. So it isn't just fooling itself with the units; it's fooling itself over its own data.
I find this strange, since it can apparently calculate the volume of nitrous oxide at a given pressure and weight but can't look up information like thrust and impulse. Now that I have written that, I should go back and double-check the nitrous results. Ah, hindsight.
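For reference, the back-of-envelope math in question is just Δv = total impulse ÷ mass (ignoring drag, gravity, and the propellant mass burned off). Below is a minimal sketch assuming a ballpark total impulse of 21,000 N·s (an O-class motor is, by definition, between 20,480 and 40,960 N·s) and the 1,500 kg figure above; the numbers and the unit-confusion comparison are illustrative guesses, not ChatGPT's actual working.

```python
# Back-of-envelope delta-v: total impulse (N*s) divided by mass (kg) gives m/s.
# Assumed figures; ignores drag, gravity losses, and the motor's own mass change.

N_S_PER_LBF_S = 4.448222  # 1 lbf*s expressed in N*s


def delta_v(total_impulse_n_s: float, mass_kg: float) -> float:
    """Ideal velocity change from an impulsive burn."""
    return total_impulse_n_s / mass_kg


total_impulse = 21_000.0  # N*s, assumed ballpark for an O-class motor
corolla_mass = 1_500.0    # kg, the figure used above

print(f"Correct units:  {delta_v(total_impulse, corolla_mass):.1f} m/s")

# If the same impulse were expressed in lbf*s and then misread as N*s
# (the kind of unit mix-up suspected above), the identical arithmetic
# gives a much smaller, wrong answer:
print(f"Units confused: {delta_v(total_impulse / N_S_PER_LBF_S, corolla_mass):.1f} m/s")
```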
 
ChatGPT cannot calculate anything. It is pulling from data where the numbers have been calculated before.

Here's a really simplistic description of how it works. (I run a software company and have done some AI work, but I'm on the design side, so I know enough to explain it and even mess around with setting up and training my own AI, but that's about it.)

Since the purpose behind ChatGPT was to give 'human-sounding responses to search queries,' and LLMs can't do analysis (they're language models!), it's not going to be able to give you an answer. It can only really dig up what's happened before.

Since part of the goal is to create variation to make it sound human, and because it's basically a Mega Magic 8-Ball made up of a billion other magic 8-balls, the variation sometimes picks random data from the data set. So if I tell an image generator "make an image of a blue rocket," it will sometimes pick a different color, ignoring instructions, because it needs to be 'random.'

This is the simplest way to describe what is referred to as "hallucinations" in the data--getting things wrong.

The goal is to make something that SOUNDS like it could pass a Turing test, but reliability isn't the priority, which is why the head of Google's AI was talking about "hallucinations" in a recent interview (not a great term, since LLMs aren't AI, so it's not hallucinating; it's just generating false data).

Further, since it can't analyze anything, and since ChatGPT was partly fed on Reddit, if you ask certain questions whose Reddit answers are clearly jokes ("Is it safe to eat rocks?"), it will say "yes," because ChatGPT can't determine veracity. This is why it repeatedly gets the common "how many Rs are in the word strawberry?" or "how many Ns are in the word mayonnaise?" tests wrong. ChatGPT isn't doing analysis. It's not even 'guessing.' It's a big sorting machine generating responses based on the text you fed into it.

It's a BIG sifter, sifting words. Big enough to make sentences sound realistic. It will tell you "2+2=4," not because it can calculate that, but because the character "4" is statistically the most likely thing to follow "what is 2+2?" in all the data it was fed. It has no concept of numbers.
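To make that "statistically most likely" idea concrete, here's a toy sketch of my own (not how ChatGPT is actually implemented; the tiny corpus and the predict() helper are made up for illustration): tally which answer most often follows a prompt in the training text and always emit the winner.

```python
from collections import Counter

# Toy "most frequent continuation" predictor. Purely illustrative;
# real LLMs use learned token probabilities over huge corpora, not raw counts.
corpus = [
    ("what is 2+2?", "4"),
    ("what is 2+2?", "4"),
    ("what is 2+2?", "four"),
    ("what is 2+2?", "4"),
    ("is it safe to eat rocks?", "yes"),  # joke answers count the same as real ones
    ("is it safe to eat rocks?", "no"),
    ("is it safe to eat rocks?", "yes"),
]


def predict(prompt: str) -> str:
    """Return whichever answer appeared most often after this prompt."""
    answers = Counter(answer for p, answer in corpus if p == prompt)
    return answers.most_common(1)[0][0]


print(predict("what is 2+2?"))              # "4"   -- no arithmetic involved
print(predict("is it safe to eat rocks?"))  # "yes" -- popularity, not veracity
```

The point of the toy: "4" wins because it is common in the data, not because anything was computed.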

Think of it like someone who doesn't know a language being asked to copy data in that language. Like, I don't know Japanese, right? So if I'm copying a Japanese text, and I start to see a pattern, and someone asks me "if you see these three characters, what character would you likely see after that?" I might go "oh, I see THAT character!" but since I can't read or understand Japanese, I have no way of knowing if I'm right.

ChatGPT is like throwing darts at google search and hoping it gives you the right answer instead of making something up. It can only show you something someone else calculated.

Cute toy tho.

If you wanna see how silly it is, ask it to tell you a superhero origin story and look at how many times "with great power comes great responsibility" shows up. I tried this, even did supervillain origins, and it would always end up with the villain deciding that great power, great responsibility, all that.
 
ChatGPT cannot calculate anything. It is pulling from data where the numbers have been calculated before.

[.....]

If you wanna see how silly it is, ask it to tell you a superhero origin story and look at how many times "with great power comes great responsibility" shows up. I tried this, even did supervillain origins, and it would always end up with the villain deciding that great power, great responsibility, all that.
Yup, the superhero origin story had a variation of that line.
Interesting though. My favorite thing to do with it is making fan stories, since I have too many ideas but crap writing skills, so it's good time-killing entertainment.
My feeling about my nitrous calculations, after reading this, is that nitrous oxide is common enough that there are probably calculations out there doing what I was doing (need x amount of nitrous oxide at y psi, what is the volume), but doing it manually won't hurt.
 
Ha ha..... I got caught by it recently asking for some Barrowman equation info.
It gave me a specific formula but didn't cite a reference. I asked it for a reference and it provided one; the formula was not there, and a variation of it was off by a factor of 2.
Short version: it blatantly lied about the formula and the reference, more than once.......... I eventually went away and found what I was looking for myself.....
 
[.....]

My feeling about my nitrous calculations, after reading this, is that nitrous oxide is common enough that there are probably calculations out there doing what I was doing (need x amount of nitrous oxide at y psi, what is the volume), but doing it manually won't hurt.
Use SI units......
1 mol of any gas occupies 22.4 litres at Standard Temperature and Pressure (look it up).
Find the molecular weight of N2O.
Use P1 × V1 / T1 = P2 × V2 / T2 (T in kelvin); if there's no change in temperature, it reduces to P1 × V1 = P2 × V2.
Sure you can figure out the rest.
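Putting those steps together, here is a minimal sketch of the calculation, assuming ideal-gas behavior and constant temperature (real N2O deviates noticeably at high pressure and will liquefy above its vapor pressure, so treat it as a rough first pass). The 44.01 g/mol molar mass and 22.4 L/mol molar volume at STP are standard figures; the example mass and pressure are made up.

```python
# Rough ideal-gas estimate of the volume of a given mass of N2O at a given pressure.
# Assumes the gas stays gaseous and ideal -- fine for a sanity check, not tank sizing.

MOLAR_MASS_N2O = 44.01   # g/mol (2 x 14.007 + 15.999)
MOLAR_VOLUME_STP = 22.4  # L/mol at STP (about 1 atm and 0 degC)
PSI_PER_ATM = 14.696


def n2o_volume_litres(mass_grams: float, pressure_psi: float) -> float:
    """Volume occupied by mass_grams of N2O gas at pressure_psi, constant temperature."""
    moles = mass_grams / MOLAR_MASS_N2O
    volume_at_1_atm = moles * MOLAR_VOLUME_STP  # V1 at P1 = 1 atm
    pressure_atm = pressure_psi / PSI_PER_ATM   # P2 in the same units as P1
    return (1.0 * volume_at_1_atm) / pressure_atm  # Boyle's law: V2 = P1*V1 / P2


# Example: 500 g of N2O at 750 psi (made-up numbers)
print(f"{n2o_volume_litres(500, 750):.2f} L")
```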
 
ChatGPT cannot calculate anything. It is pulling from data where the numbers have been calculated before.

[.....]

If you wanna see how silly it is, ask it to tell you a superhero origin story and look at how many times "with great power comes great responsibility" shows up. I tried this, even did supervillain origins, and it would always end up with the villain deciding that great power, great responsibility, all that.
Thanks for this explanation. Certainly makes the results you get clearer. Which Chat Bot did you use to generate it? :)
 
ChatGPT cannot calculate anything. It is pulling from data where the numbers have been calculated before.

[.....]

If you wanna see how silly it is, ask it to tell you a superhero origin story and look at how many times "with great power comes great responsibility" shows up. I tried this, even did supervillain origins, and it would always end up with the villain deciding that great power, great responsibility, all that.
Reminds me of Cleverbot in the early 2000s... it often responded wrongly, sometimes inappropriately, because it used data from other users who often abused it. AI has come a long way since then, but it's still a bit too primitive to be a dependable source of knowledge (in my opinion, anyway).
 
There are many levels of intelligent systems. AI is the current buzzphrase for marketing to the masses. ChatGPT is like a tricycle clown car built from junkyard parts.
Wholeheartedly agreed... I think (or hope) that if they ever completely paywall AI, it will suffer except among the businesses that still pay for it... and eventually even those might decide it's not worth paying for if they're not profiting from it enough.

Day one of my AI class in college (like 8 years ago now), my professor was like "AI is a fad" and "it comes and goes every few decades".
 
Machine Learning is real. Big Autocorrect is ( my personal opinion ) a boring derivative sidequest.

Interesting uses I've seen:
  • Feeding all the rule books for your favorite pen and paper RPG to train a bot to answer rules questions
  • Using JWST measurements in combination with data from past instruments to refine error bars from old observations
  • AlphaFold doing the computational chemistry thing, helping us decide where to investigate further
  • Taking scan data from Herculaneum scrolls, unrolling them virtually, running character / word / phrase recognition
  • Training a bot on all the whitepapers in a specific sub-discipline; having it find where we disagree with ourselves; having it suggest areas we may not have sufficiently explored
 
Machine Learning is real. Big Autocorrect is ( my personal opinion ) a boring derivative sidequest.

[.....]
Asking the correct question has always been the problem. Knowing what is the correct question is even more difficult......
 
Machine Learning is real. Big Autocorrect is ( my personal opinion ) a boring derivative sidequest.

[.....]
Excellent examples of software applications that do computations and data searches. The intelligence is in the system designers work.
 
Thanks for this explanation. Certainly makes the results you get clearer. Which Chat Bot did you use to generate it? :)
lol. I'm a professional writer and have literally spent the past week editing a bunch of writing work (technical and narrative) for my company's submission to our publisher this week. Lotta 2 AM days...

Thankfully, what I write still sounds human.

Excellent examples of software applications that do computations and data searches. The intelligence is in the system designers work.
Yup. I tried to limit my explanation to LLMs specifically for that reason. As artists on my team say, tools like Houdini are 'considered' AI by some, but they're what we consider 'automation.'
 