The subtle skill of using ChatGPT

April 2024

How to best use the tool to boost productivity while keeping credibility

No doubt that by now, you will have tried out ChatGPT or similar tools. From code reviews and ideation to investment advice and travel itineraries, ChatGPT can give you an answer to just about anything. And honestly, the results are quite impressive. But try asking it about something you know a lot about, and you’ll notice that the answer often isn’t necessarily wrong, but it’s not quite right either. It’s just a bit… off.

And while that’s not an issue when you know the subject and can spot where it goes wrong, you have to wonder – what about when you ask ChatGPT about a topic you’re not that familiar with? How do you spot the inconsistencies or inaccuracies then?

How ChatGPT works

When using ChatGPT, it’s important to keep in mind that the model is trained mostly on web pages, Wikipedia and other web text. The way a prompt is worded is mirrored in the response, since the model matches the input with the most likely answer based on contextual cues. For example, if the input is minimal and simply worded, the output will be similarly so, with little nuance or depth. It doesn’t apply the creative interpretation a human would if asked the same thing; it provides the most statistically likely response to the given input.

Everything ChatGPT responds with comes from a model trained on a predetermined set of information. The response is built from output that is mathematically ‘correct’ or ‘logical’, but it lacks the nuance and diverse interpretation of a human mind. There will be times when a literal response is fine, but plenty more where extra context and human intelligence are needed before you can take the answer as truth.

Here’s an example from our own experience. In the first stages of building our data warehouse, we used ChatGPT to speed up writing the SQL queries that create views in BigQuery. Views are useful because they return results based on the current underlying data each time they’re queried. Querying a view that is built on another view refreshes the entire lineage of views, and chaining queries in this way helps create a clearer flow of data, as the sketch below shows.
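As a rough illustration (the dataset, table and column names here are hypothetical, not our actual warehouse), chained views in BigQuery look roughly like this:

CREATE OR REPLACE VIEW analytics.orders_clean AS
SELECT
  order_id,
  customer_id,
  DATE(created_at) AS order_date,
  amount
FROM raw.orders        -- the view always reflects the current contents of raw.orders
WHERE amount > 0;

CREATE OR REPLACE VIEW analytics.daily_revenue AS
SELECT
  order_date,
  SUM(amount) AS revenue
FROM analytics.orders_clean   -- a view built on another view: the whole lineage stays live
GROUP BY order_date;

Because both statements define views rather than tables, querying analytics.daily_revenue always reflects whatever is currently in raw.orders.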

However, based on our prompts, ChatGPT generated CREATE TABLE queries instead. These create a new table that is no longer connected to the data it was derived from. Also, a plain CREATE TABLE query can only be run once, because it fails if a table with the same name already exists. That makes a live flow of data impossible without adding scheduled queries or stored procedures.
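Concretely, and again with hypothetical names, the difference between what ChatGPT produced and what we intended looks like this:

-- What ChatGPT generated: a one-off snapshot.
-- Re-running it fails once analytics.daily_revenue already exists,
-- and the table does not change when raw.orders changes.
CREATE TABLE analytics.daily_revenue AS
SELECT order_date, SUM(amount) AS revenue
FROM analytics.orders_clean
GROUP BY order_date;

-- What we intended: a view that stays linked to the underlying data.
CREATE OR REPLACE VIEW analytics.daily_revenue AS
SELECT order_date, SUM(amount) AS revenue
FROM analytics.orders_clean
GROUP BY order_date;

Both versions run without errors, which is exactly why the mistake is easy to miss: the code is valid, but only the second gives you the refreshing, chained lineage described above.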

Here, human knowledge played a crucial role in catching ChatGPT’s mistake. Though the output was technically correct from a code perspective, the concept behind the code wasn’t what we intended to produce.

How should we use ChatGPT then?

ChatGPT is a productivity booster. At NSL we use it to automate mundane tasks, search for inspiration and expand our knowledge on certain topics. But we always keep in mind that ChatGPT isn’t able to fact-check with the same context and complex nuance as humans can. 

Think of ChatGPT as an intern or junior: an efficient assistant that helps move the work forward. But ultimately, the person using ChatGPT is the one who needs to engage in complex thought, both when writing the prompt and when interpreting the answer.

A reminder to make ChatGPT work with you, rather than for you.

--

We’re passionate about all things data and are striving to make it more approachable, so you can expect regular updates on this topic. Get in touch if we can help guide your team to a better understanding of data, or if you’re ready to increase your revenue with the help of our experts.

By Bram Verleur
Head of Data at Not Selling Liquid