Information seeking and AI
AI tools are, more likely than not, here to stay. As with previous attempts at making information more easily accessible (Wikipedia, Google, etc.), it is our job to find ways to make these tools work for us rather than simply shun them. That, at least, is my personal opinion.
I believe that the widespread use of tools like OpenAI's ChatGPT is in no small part due to people falling into different categories of information-seekers, one of which can be described through Zipf's principle of least effort. Zipf hypothesized that people, animals, and machines will naturally choose the path of least resistance when searching for information. This fits the idea that once a tool becomes more convenient to use, information accuracy be damned, it will be used more.
The current pathways to information are broadly varied. I have talked to people who converge on a platform like Instagram, YouTube, or TikTok for certain kinds of searches while using Google, friends, or books for others. For example, a person might search Instagram for food recipes but use Google for research related to their thesis. My take is that ChatGPT is a "good enough" source for the first kind of casual information.
There is a feedback loop here as well: as AI use increases, our trust in these systems can grow (though trust is not the same as trustworthiness!), which in turn can encourage people to use them even more.
Onto my point: since we can be relatively sure that tools like ChatGPT will be used, we must discuss how people can use them to find accurate and helpful information, not just information. Just as we teach students to verify the sources cited on a Wikipedia page and use those for research, or to compare multiple search-engine results by reading past what the headlines tell us, we must teach people to be critical of the information AI provides and to use it to further their understanding of a topic, not to replace it.
I am no expert, and I do not work in the domain of AI; this is mostly a semi-educated ramble about what I think is right for me. I use AI on an almost daily basis, currently mostly locally through Jan.ai, running a few models depending on my needs. I use a tiny model mostly for text manipulation: paraphrasing, summarizing information I find, or splitting dense material into smaller chunks. For the boring parts of programming, sometimes refactoring, sometimes scaffolding, I use a larger model.
If we want AI to do the boring parts of our lives, we need to find the ways in which it actually can. LLMs are powerful tools; corporations will always be there to exploit them for profit, but individuals can try to exploit them for purpose and usefulness instead.