Introduction
The advent of Large Language Models (LLMs) like ChatGPT over the last few years has revolutionized the use of AI. Previously, AI felt esoteric and inaccessible to the average person unless they had a degree in mathematics. These generalized models have shifted that perception, making interaction with complex systems as simple as asking questions.
However, we are still navigating how to interact with them effectively. Similar to the early days of the internet, we know this technology will bring significant changes, but we are still figuring out exactly how.
One notable trend in discussions about LLMs is the focus on the negative aspects: what these technologies might take away or replace. This concern is understandable. Until now, no technology has convincingly mimicked human behaviour to the extent that LLMs do, creating a sense of unease when we see them generate code, prose, or images from minimal input. However, this is an illusion. If we think we can simply give up our autonomy and expect the machine to do the work for us, we will be sorely disappointed.
The Call
Holly Herndon is a musician and artist whose work I have followed for a long time. Over the years I have watched her develop into a creative who embraces the possibilities AI can bring, and like all good artists she is using art to help guide the conversation towards how we can embrace and use AI to enhance our lives, as opposed to fighting it.
Her new work with artist Mat Dryhurst, The Call, really captures how I think we should be approaching the use of AI and the potential it brings:
They view the development of present-day AI models as the latest in a series of coordination technologies that allow individuals to work and build collectively. For millennia, choral and group singing have served a similar purpose of creating meaning in social and civic life. Evolving through protocols like call and response, they have helped build spaces and structures for gathering, processing, and transmitting information
Taken from the booklet that accompanies The Call exhibition
The key concept here is call and response. AI feeds on data, both the data it is trained on and the data you give it through your inputs; it isn't a one-way process. It is communication through a conversation.
Learning with LLMs
To give an example, I used ChatGPT and Microsoft Copilot to help me learn Elixir, an Erlang-based functional language with which I had little prior experience. I started by reading some documentation, but then I tried to use an LLM to tell me where I was wrong or how I could improve.
What I did not do is dump questions into the LLM and then blindly copy the result without any understanding or critical thinking. Leaving my brain at the door and taking the code given to me wholesale is a recipe for disaster.
This experience taught me how to use the results to learn effectively. Occasionally, I reverted to my usual method of web searching to compare the LLM’s answers against search engine results. Using an LLM felt like having a knowledgeable colleague to ask questions. It worked best when I had specific issues, such as a piece of code that didn’t work or needed improvement. In these cases, the LLM could teach me something new and help me get unstuck.
It was also interesting when it got things wrong. There were a couple of times when it didn't quite answer the question in the way I wanted, or presented code that was faulty, but I was still able to take pieces of the code and adapt them to what I was trying to do.
The key thing was the back and forth: "I've done this, what do you think?" "How about now that I've tweaked it?" The nice thing was that, because it was question-based, it made you think of the work more as an exploration than as a problem you had to solve up front.
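To give a flavour of what one of those exchanges might look like, here is a hypothetical sketch in Elixir (the function and module names are my own invention, not a transcript of an actual session). The first version is the kind of hand-rolled solution I might write as a beginner; the second is the kind of idiomatic refinement an LLM might point me towards when asked "can this be improved?":

```elixir
defmodule WordStats do
  # First attempt: manually reducing over the words to build a count map.
  # It works, but it re-implements something the standard library provides.
  def count_words(text) do
    text
    |> String.split()
    |> Enum.reduce(%{}, fn word, acc ->
      Map.update(acc, word, 1, &(&1 + 1))
    end)
  end

  # After a round of "what do you think?", the suggestion was to use
  # Enum.frequencies/1 (available since Elixir 1.10), which does the
  # same job in one call.
  def count_words(text, :idiomatic) do
    text
    |> String.split()
    |> Enum.frequencies()
  end
end
```

The point isn't the final code; it's that each round of the conversation taught me something about the language that I could then verify against the documentation.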
When I compared this against my old approach of typing a question into Google, however, I noticed a key difference. When working with the LLM, you ask a question and are presented with a single answer; do the same thing with Google and you get back a selection of possible answers. When using the search engine, I would jump back and forth between the results, absorb different opinions on the question, and then piece together what I needed.
It struck me that this is probably an unconscious skill I've developed from years of working with code and the internet, but it is something that feels different from working with an LLM.
One useful thing with an LLM, though, is that you have a choice in how an answer is presented: if you want more than one opinion, you can ask it to give you several different answers. But remember, these are several answers from the same model; when you use a search engine, you are comparing answers from a number of completely independent sources.
However, the one thing you can't do with a search engine is ask it to answer all your questions like a pirate.
Conclusion
So what is my takeaway from this? I see LLMs as the ultimate rubber duck: an actual feedback loop rather than a plastic stand-in. I think we have a lot to gain from seeing LLMs this way. Used like this, the tool is not taking anything away from me; instead it is creating a reinforcing feedback loop, with the back and forth pushing you to grow and expand your knowledge. Just don't expect it to do all the work for you.
If you’re looking to explore AI in a way that enhances your business, we’d love to help. Whether it’s strategy, training, or implementation, our team at Advancing Analytics is here to guide you. Get in touch with us and let’s start the conversation.
Author
Adam Lammiman