Fun with ChatGPT: Putting vast amounts of computing resources to work on important model rocketry tasks

The Rocketry Forum

Yet, there's a reason that if you take a U.S. History class in college or high school, your syllabus will probably stop somewhere around the year 2000.

ha! in my day, we never covered anything after the Marshall Plan! no Korea, Vietnam...
 
let me guess, you are on the alert about "bias" because you watch fox news? yeah, that's a healthy reaction.

have you read any of the chatgpt answers? it's obsequious trying to present several "sides".

but yeah, GIGO. And you'll never know, because it cannot explain how it arrived at any answer. that is, what were the inputs and weights. it is inscrutable.

Sorry dude, don't watch Fox. Don't watch any news at all, because they are all untrustworthy. Try again.

I was just tossing out the question about bias to see who dog-piled on it. I have little faith that said AIs won't be fed politically slanted information, and since most tech companies tend to be run by serious lefties... it's a given that there will be biased AIs (GIGO, you know).
 
ha! in my day, we never covered anything after the Marshall Plan! no Korea, Vietnam...
My U.S. History class stopped covering topics after the Vietnam War, or maybe we stopped before that war started. I asked the teacher why, and he said anything we read or discuss that recent can be considered "journalism" and not history.
 
Checking that my own answers here make sense:

Input:
"Do you operate similarly to image recognition by artificial intelligence?"

Output:
"In some ways, yes, my operation can be compared to image recognition by artificial intelligence. Like image recognition systems, I am based on deep learning algorithms that are designed to learn patterns and make predictions based on large sets of training data.

However, there are also some key differences between image recognition and natural language processing (NLP), which is the field of AI that I specialize in. One of the biggest differences is that language is a much more complex and nuanced form of communication than images, which means that NLP models like myself need to be able to understand things like context, syntax, and semantics in order to interpret language correctly.

Another difference is that language is often less structured and more ambiguous than images, which can make it more challenging for NLP models to accurately parse and interpret text. Nonetheless, NLP and image recognition systems are both examples of how AI can be used to analyze and make sense of large amounts of data, and there is often overlap between the techniques used in these two fields."
 
I understand it like this:

Human beings have provided it with enormous amounts of selected data ("training data"). Our input makes it decide what information to select (statistically, as boatgeek says) and how to organize it as an output. If new training data is fed to it (by the programmers), two similar inputs, provided at different times, could generate two different outputs, the later one considering the newer data. I don't think the outputs are fed back to it.


I don't think it uses search engines. People (programmers) have already fed it with training data. The higher the quality of the training data, the more accurate the response. I think. I'm not sure how often new training data is provided to improve the model.

M'eh ... why don't I just ask it:

Input:
"Do you require an internet search to provide an output?"

Output:
As an AI language model, I do not always require an internet search to provide an output. When you ask me a question or input, I analyze the text and use my knowledge and understanding of the language to generate a response.

However, there may be times when I need to access external sources of information to provide a more accurate or complete response. In those cases, I may use the internet to search for relevant information to include in my answer.

For example, if you ask me a factual question like "What is the capital of France?", I can provide the answer based on my pre-existing knowledge. However, if you ask me a more complex question like "What are the best restaurants in Paris?", I may need to search the internet to provide a more informed answer.

In summary, while I can provide an output without an internet search, there may be cases where I need to access external information to provide the most accurate and complete response possible."
Does AI actually “think” or does it just “gather” data that is already on the web? Is it just a virtual sponge…?
 
Does AI actually “think” or does it just “gather” data that is already on the web? Is it just a virtual sponge…?

In the simplest terms: A programmer writes code for a computer and feeds it some data to complete the code. Then, users like us feed a question to the computer, and the computer produces an answer (computer execution or processing is essentially a calculation, made with many tiny electrical voltages and currents on chips. The word "thinking" is usually reserved for humans, and the word "processing" for computers. I purposely confuse the two for fun but not everyone does.).

Particularities of AI:
Many developers (teams) are needed because the code is very complex. The processing power required is extreme, so many processors are needed (and they can be at different locations around the globe), and the amount of data is enormous.

Particularities of the code (algorithms): lots of math. Fundamentally, it's all additions and multiplications, but millions or billions of them, organized in a special way. Organizing how and in what order the calculations are made distinguishes one AI system from another. Interestingly, the organization is inspired by biology.

Particularities of the data: the more the better. While AI algorithms have been known for decades, only since the internet has there been enough data available to make AI powerful enough for things like ChatGPT.
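To make those additions and multiplications concrete, here is a toy two-layer network in Python. The weights are invented for illustration; a real model learns billions of them.

```python
# Toy forward pass of a two-layer neural network: nothing but
# multiplications, additions, and a simple nonlinearity (ReLU).
# The weights below are made up; real models learn billions.

def relu(x):
    return [max(0.0, v) for v in x]

def matvec(W, x):
    # One matrix-vector product: the core additions and
    # multiplications, repeated billions of times in a real model.
    return [sum(w * v for w, v in zip(row, x)) for row in W]

W1 = [[0.5, -0.2], [0.1, 0.8]]   # first-layer weights (invented)
W2 = [[1.0, -1.0]]               # second-layer weights (invented)

x = [1.0, 2.0]                   # input vector
h = relu(matvec(W1, x))          # hidden layer
y = matvec(W2, h)                # output
print(y)
```

The "inspired by biology" part is the layered structure: each layer's outputs feed the next, loosely mimicking networks of neurons.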
 
In the simplest terms: A programmer writes code for a computer and feeds it some data to complete the code. Then, users like us feed a question to the computer, and the computer produces an answer (computer execution or processing is essentially a calculation, made with many tiny electrical voltages and currents on chips. The word "thinking" is usually reserved for humans, and the word "processing" for computers. I purposely confuse the two for fun but not everyone does.).

Particularities of AI:
Many developers (teams) are needed because the code is very complex. The processing power required is extreme, so many processors are needed (and they can be at different locations around the globe), and the amount of data is enormous.

Particularities of the code (algorithms): lots of math. Fundamentally, it's all additions and multiplications, but millions or billions of them, organized in a special way. Organizing how and in what order the calculations are made distinguishes one AI system from another. Interestingly, the organization is inspired by biology.

Particularities of the data: the more the better. While AI algorithms have been known for decades, only since the internet has there been enough data available to make AI powerful enough for things like ChatGPT.
The difference between AI and human, between thinking and processing is getting smaller and smaller...

https://mpost.io/chatgpt-passes-the-turing-test/
 
Does AI actually “think” or does it just “gather” data that is already on the web? Is it just a virtual sponge…?
Answer: it depends on the definition. My understanding is that the AI was trained with a vast amount of data that's already out on the web somewhere. Its programmers are presumably still feeding it data, so it nominally gathers data off the web. However, it doesn't appear to google (or bing or duckduckgo) the answer and turn that into text. It looks in its database to see how the words in the query usually associate with other words. Is that thinking? I dunno.

What I have seen from the output is that the AI is really verbose and tends to give really squishy answers (many people think X, but many others think Y, so there's a lot of controversy). It's also frequently wrong about technical information that might not be widely shared on the web, like the nose cone question posted upthread. Of course, mainstream TV news anchors are often all of those things, and they are widely believed to be thinking humans. :D
 
I keep getting a network error but waiting to see how it does with:

"Where is the Center of Pressure of a model rocket that has a body tube that is 6.6cm in diameter and 45.7cm long with an ogive nosecone that is 19.8cm long and 4 fins that are 5.1cm tall and 10.2cm long at root, 10.2cm long at tip, and have a 5.1cm sweep length."


Finally got a complete response -- I guess all us rocketry folks were keeping Chat GPT busy yesterday (or my internet was being killed by my kids watching youtube). Note that Chat GPT reported the distance from the base, so converting the OR result gives 22.9cm from the base (vs. Chat GPT's answer of 20.57cm). Maybe Chat GPT is secretly using the base drag hack to move the CP back...

My query:

"Where is the Center of Pressure of a model rocket that has a body tube that is 6.6cm in diameter and 45.7cm long with an ogive nosecone that is 19.8cm long and 4 fins that are 5.1cm tall and 10.2cm long at root, 10.2cm long at tip, and have a 5.1cm sweep length."

Chat GPT Answer (and thought process / math calculations):

To determine the center of pressure (CP) of a rocket, we need to calculate the total surface area of all components of the rocket and their corresponding centers of pressure, and then use those values to find the net center of pressure.

In this case, we have a rocket with four fins and an ogive nosecone, as well as a body tube. We'll calculate the CP of each component individually:

  1. Body tube: The CP of a cylinder is located at its geometric center, or halfway between the ends. So for a body tube that is 45.7cm long, the CP will be at 22.85cm from the base.
  2. Nosecone: The CP of an ogive nosecone can be approximated as 2/3 of the way from the tip of the nosecone to its base. So for a nosecone that is 19.8cm long, the CP will be at 6.6cm from the base.
  3. Fins: The CP of a fin depends on its shape, but for a rectangular fin like the one in this rocket, the CP is located 1/4 of the way from the leading edge to the trailing edge. Since the fin has a sweep length of 5.1cm, we can calculate the average chord length as (10.2cm + 5.1cm) / 2 = 7.65cm. Then, the CP of the fin will be located at 1.91cm from the leading edge.
Now we can calculate the total surface area of each component and their corresponding moments, using the distances we just calculated:

  • Body tube: surface area = pi x radius^2 x length = 3.14 x (6.6/2)^2 x 45.7 = 7975.16 cm^2, moment = 7975.16 x 22.85 = 182,173.23 cm^3
  • Nosecone: surface area = pi x radius^2 x (2/3) x length = 3.14 x (6.6/2)^2 x (2/3) x 19.8 = 906.78 cm^2, moment = 906.78 x 6.6 = 5,994.35 cm^3
  • Fins: surface area = chord length x height = 7.65 x 5.1 = 39.02 cm^2 per fin, moment = 39.02 x 1.91 = 74.48 cm^3 per fin x 4 fins = 297.92 cm^3
Now we can find the net center of pressure by adding up the moments of each component and dividing by the total surface area:

Net moment = 182,173.23 + 5,994.35 + 297.92 = 188,465.5 cm^3 Total surface area = 7975.16 + 906.78 + 39.02 x 4 = 9163.02 cm^2 Center of pressure = Net moment / Total surface area = 188,465.5 / 9163.02 = 20.57cm from the base

Therefore, the center of pressure of this model rocket is located approximately 20.57cm from the base.
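For what it's worth, that answer is doubly suspect: the surface-area-weighted method above is not the Barrowman method that simulators like OpenRocket use, and the arithmetic doesn't even reproduce its own formulas. A quick Python check, re-running ChatGPT's stated formulas with its own numbers:

```python
# Re-run the formulas exactly as ChatGPT stated them, using its
# own value of pi (3.14) and its own dimensions.
pi = 3.14

# Body tube: "pi x radius^2 x length = 3.14 x (6.6/2)^2 x 45.7 = 7975.16"
body_area = pi * (6.6 / 2) ** 2 * 45.7
print(round(body_area, 2))   # prints 1562.69, not the claimed 7975.16

# Nosecone: "pi x radius^2 x (2/3) x length = ... = 906.78"
nose_area = pi * (6.6 / 2) ** 2 * (2 / 3) * 19.8
print(round(nose_area, 2))   # prints 451.37, not the claimed 906.78

# Fins: "chord length x height = 7.65 x 5.1 = 39.02"
fin_area = 7.65 * 5.1        # 39.015, so this one does match the claim
```

So the stated formulas and the stated numbers disagree with each other, on top of the method disagreeing with OpenRocket.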
 
Particularities of the data: the more the better. While AI algorithms have been known for decades, only since the internet has there been enough data available to make AI powerful enough for things like ChatGPT.

And now, we're in the age of "BIG Data". Machines are recording data about us every day, far faster than the entire human race could ever review it! My phone counts every step I make. Fitbits record every heart beat... It's only gonna get worse. AI models will add huge amounts of data about you - yes YOU!

Privacy is pretty much long gone (cameras, Alexa, Tik...) And Predictive modeling is getting crazy smart!
 
most tech companies tend to be run by serious lefties...

oh yeah, lefties like TJ Rogers, Peter Thiel, Sam Altman, fing Elon? I could name 3 dozen more...

all those right wing reactionary libertarian "dont need not reg gu la shun" who when crying to the gubbermint for a bailout when the banks failed? those lefties?

yeah, let's talk about this. not.
 
Finally got a complete response -- I guess all us rocketry folks were keeping Chat GPT busy yesterday (or my internet was being killed by my kids watching youtube). Note that Chat GPT reported the distance from the base, so converting the OR result gives 22.9cm from the base (vs. Chat GPT's answer of 20.57cm). Maybe Chat GPT is secretly using the base drag hack to move the CP back...

My query:

"Where is the Center of Pressure of a model rocket that has a body tube that is 6.6cm in diameter and 45.7cm long with an ogive nosecone that is 19.8cm long and 4 fins that are 5.1cm tall and 10.2cm long at root, 10.2cm long at tip, and have a 5.1cm sweep length."

Chat GPT Answer (and thought process / math calculations):

To determine the center of pressure (CP) of a rocket, we need to calculate the total surface area of all components of the rocket and their corresponding centers of pressure, and then use those values to find the net center of pressure.

In this case, we have a rocket with four fins and an ogive nosecone, as well as a body tube. We'll calculate the CP of each component individually:

  1. Body tube: The CP of a cylinder is located at its geometric center, or halfway between the ends. So for a body tube that is 45.7cm long, the CP will be at 22.85cm from the base.
  2. Nosecone: The CP of an ogive nosecone can be approximated as 2/3 of the way from the tip of the nosecone to its base. So for a nosecone that is 19.8cm long, the CP will be at 6.6cm from the base.
  3. Fins: The CP of a fin depends on its shape, but for a rectangular fin like the one in this rocket, the CP is located 1/4 of the way from the leading edge to the trailing edge. Since the fin has a sweep length of 5.1cm, we can calculate the average chord length as (10.2cm + 5.1cm) / 2 = 7.65cm. Then, the CP of the fin will be located at 1.91cm from the leading edge.
Now we can calculate the total surface area of each component and their corresponding moments, using the distances we just calculated:

  • Body tube: surface area = pi x radius^2 x length = 3.14 x (6.6/2)^2 x 45.7 = 7975.16 cm^2, moment = 7975.16 x 22.85 = 182,173.23 cm^3
  • Nosecone: surface area = pi x radius^2 x (2/3) x length = 3.14 x (6.6/2)^2 x (2/3) x 19.8 = 906.78 cm^2, moment = 906.78 x 6.6 = 5,994.35 cm^3
  • Fins: surface area = chord length x height = 7.65 x 5.1 = 39.02 cm^2 per fin, moment = 39.02 x 1.91 = 74.48 cm^3 per fin x 4 fins = 297.92 cm^3
Now we can find the net center of pressure by adding up the moments of each component and dividing by the total surface area:

Net moment = 182,173.23 + 5,994.35 + 297.92 = 188,465.5 cm^3 Total surface area = 7975.16 + 906.78 + 39.02 x 4 = 9163.02 cm^2 Center of pressure = Net moment / Total surface area = 188,465.5 / 9163.02 = 20.57cm from the base

Therefore, the center of pressure of this model rocket is located approximately 20.57cm from the base.

I am completely blown away by this. I fundamentally didn't think I would ever see something like this during my lifetime. . .I thought it was total science fiction. I am actually scared - not kidding.

Sandy.
 
I challenged my teen-aged son to come up with an HTML page that calculated thrust-to-weight ratios based on available motors and to use metric. I thought it would take a couple of days but in three hours, he came back with this:



He confessed to using ChatGPT and had run out of queries a couple of times or it would have been faster. It works pretty well. We sat down and calculated the TWR by hand and it seems accurate, based on the thrust numbers given. Ours were a little off from the app because I used values from https://www.thrustcurve.org/. I don't know where ChatGPT got its numbers from.
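For reference, the TWR arithmetic itself is only a couple of lines. A minimal Python sketch (the 6 N / 100 g example numbers are invented; real average-thrust figures should come from a source like thrustcurve.org):

```python
# Thrust-to-weight ratio in metric: average thrust in newtons
# divided by weight (mass in kilograms times g).

G = 9.81  # standard gravity, m/s^2

def twr(avg_thrust_n, mass_kg):
    # A common rule of thumb is a TWR of at least 5 for a safe
    # departure from the launch rod or rail.
    return avg_thrust_n / (mass_kg * G)

# Invented example: a motor averaging 6 N pushing a 100 g rocket.
print(round(twr(6.0, 0.100), 2))  # prints 6.12
```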
 


Amazing what ChatGPT can do.

ChatGPT is an autoregressive word-chunk predictor. It runs in a loop, predicting the next word chunk. It uses a set of 32 thousand word chunks as its alphabet.

It's easier to visualize using letters. See the attached image.
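That next-chunk loop can be sketched in a few lines of Python. Here a tiny hand-made lookup table stands in for the real model's learned weights and its vocabulary of roughly 32,000 chunks:

```python
# Toy autoregressive loop: predict the next chunk, append it to the
# context, and repeat -- the same loop structure ChatGPT runs, with
# a lookup table standing in for billions of learned weights.

model = {"the": "rocket", "rocket": "goes", "goes": "up"}

def generate(prompt_words, max_steps):
    words = list(prompt_words)
    for _ in range(max_steps):
        nxt = model.get(words[-1])  # "predict" from the latest chunk
        if nxt is None:             # no known continuation: stop
            break
        words.append(nxt)           # the output becomes new input
    return " ".join(words)

print(generate(["the"], 10))  # prints "the rocket goes up"
```

A real model predicts a probability distribution over all of its chunks from the entire context, not just the last word, but the feed-the-output-back-in loop is the same idea.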
 

Attachments

  • Screenshot_20230323_235317_Opera~2.jpg (107.1 KB)
The difference between AI and human, between thinking and processing is getting smaller and smaller...

https://mpost.io/chatgpt-passes-the-turing-test/
I asked ChatGPT if it passed the test and it said this:

Input: "Does chatgpt pass the turing test?"

Output:
"The Turing test is a subjective measure of a machine's ability to exhibit intelligent behavior that is indistinguishable from that of a human. While ChatGPT is a highly advanced language model that can simulate human-like conversations, it is not capable of passing the Turing test in its current form.

The Turing test involves a human evaluator interacting with both a human and a machine through a text-based interface, without knowing which one is the machine. If the evaluator is unable to reliably distinguish between the human and the machine based on their responses, then the machine is said to have passed the Turing test.

However, ChatGPT's responses are generated based on statistical patterns learned from vast amounts of text data, and it does not have true consciousness or understanding of the world in the way that humans do. While it can provide useful information and engage in interesting conversations, it is still ultimately a machine and not a human."

Answer: it depends on the definition.
"Thinking" and "processing" are concepts based on a limited number of languages by one mammal species. A common reality underlying both concepts is that they are electrical activity, which doesn't follow human semantics and definitions. So at some level (somewhere at the molecular, crystalline, or atomic level), I'd say they cannot be distinguished.

My understanding is that the AI was trained with a vast amount of data that's already out on the web somewhere. Its programmers are presumably still feeding it data, so it nominally gathers data off the web. However, it doesn't appear to google (or bing or duckduckgo) the answer and turn that into text. It looks in its database to see how the words in the query usually associate with other words. Is that thinking? I dunno.
I asked:

Input: "Do you require an internet search to provide an output?"

Output:
"As an AI language model, I do not always require an internet search to provide an output. When you ask me a question or input, I analyze the text and use my knowledge and understanding of the language to generate a response.

However, there may be times when I need to access external sources of information to provide a more accurate or complete response. In those cases, I may use the internet to search for relevant information to include in my answer.

For example, if you ask me a factual question like "What is the capital of France?", I can provide the answer based on my pre-existing knowledge. However, if you ask me a more complex question like "What are the best restaurants in Paris?", I may need to search the internet to provide a more informed answer.

In summary, while I can provide an output without an internet search, there may be cases where I need to access external information to provide the most accurate and complete response possible."

GPT4 is supposedly much better, but I don’t know when the public will be able to get their hands on it.
Google is also developing "Bard", which is also better. Maybe OpenAI wants GPT4 to be as good as Bard first. I think they give us access so they can gather data on how it's used, and there will always be some version we don't have access to (freely).

And now, we're in the age of "BIG Data". Machines are recording data about us every day, far faster than the entire human race could ever review it! My phone counts every step I make. Fitbits record every heart beat... It's only gonna get worse. AI models will add huge amounts of data about you - yes YOU!

Privacy is pretty much long gone (cameras, Alexa, Tik...) And Predictive modeling is getting crazy smart!
"Big Data" is exactly what was first required to make AI as we know it today possible. Our data is collected but individual privacy is kept. Banks have been doing this for centuries (and email servers for a few decades).

I am completely blown away by this. I fundamentally didn't think I would ever see something like this during my lifetime. . .I thought it was total science fiction. I am actually scared - not kidding.
Many people, including CEOs, have expressed concerns. But I say uncharted territory has been around forever, so m'eh. New things aren't new. 🤨 🤔 o_O 🙃

(Things that are new, are new, but the advent of things that are new, is not new.)
 
Google is also developing "Bard", which is also better. Maybe OpenAI wants GPT4 to be as good as Bard first. I think they give us access so they can gather data on how it's used, and there will always be some version we don't have access to (freely).
I haven't tried Bard yet but what I have *heard* is that it is not nearly as good as GPT4. For what it's worth.
 
Oh ... um ... I uh ... just passing by ... 😁

Input:
"How does ChatGPT compare with Eliza artificial intelligence?"

Output:
"ChatGPT and Eliza are both language models designed to engage in conversation with humans. However, there are several key differences between the two.

Eliza was one of the earliest chatbots created, back in the 1960s, and was designed to simulate a psychotherapist. It used a set of predefined rules to respond to user input, and essentially just mirrored the user's input back to them in the form of a question. While it was groundbreaking for its time, Eliza's responses were quite limited and it could only respond to a narrow range of inputs.

In contrast, ChatGPT is a much more advanced language model, based on the GPT architecture, and has been trained on a vast corpus of text data. This allows it to generate much more sophisticated responses, and it is capable of engaging in much more complex conversations. ChatGPT can also learn and adapt over time as it is exposed to more data and user interactions, whereas Eliza's responses are fixed.

Overall, while Eliza was an important early milestone in the development of chatbots, ChatGPT represents a significant leap forward in terms of the capabilities and sophistication of conversational AI."
 
I haven't tried Bard yet but what I have *heard* is that it is not nearly as good as GPT4. For what it's worth.
I have and it's months behind GPT3. Our company is currently using GPT4 in production to read vendor invoices to get data from them. It is uncanny how accurate it is and it's more accurate than GPT3. Downright scary.
 
What does ChatGPT think about its ancient ancestor Eliza?
ha!! that's the first thing I tried after setting up a chatGPT account.

it wasn't funny. not even degenerate.

so then I tried feeding Zippy the Pinhead quotes into chatGPT. that was always good for a laugh with Eliza. again, nothing.

two thumbs down for chatGPT.
 
ha!! that's the first thing I tried after setting up a chatGPT account.

it wasn't funny. not even degenerate.

so then I tried feeding Zippy the Pinhead quotes into chatGPT. that was always good for a laugh with Eliza. again, nothing.

two thumbs down for chatGPT.

Enjoy! https://www.cyberpsych.org/eliza/
 