Hello, my dear friend!
As you may already be aware, I am fascinated by innovations and technologies, as well as the infinite human potential through creative and deep thinking.
At the same time, I can’t help but see two paths unfolding.
As someone who has worked with cutting-edge tools like machine learning, EEG brain research, fMRI, eye tracking, data science, AI, and other analytical tools, I believe we are now at a historic turning point.
I love what AI can do, but I’m concerned about the future.
Specifically, I am concerned about the growing trend toward AI centralization, paywalls, opaque privacy policies, and the introduction of censorship and political bias into these tools.
Censorship for Harm Reduction
I understand the need for some form of harm reduction, but I would rather we do more at the front door to prevent abuse in the first place.
An e-mail address, a working phone number, and a name are already collected during the onboarding process.
By altering this process slightly and creating a seven-point set of rules that people HAVE to read and accept, we would ultimately reduce the number of users trying to abuse these systems.
If people can be held accountable, most will react in a more responsible way.
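As an illustration, such a front-door check could be sketched as a simple validation step. This is a minimal, hypothetical sketch; the field names, rules, and validation patterns are my own assumptions, not taken from any real onboarding system:

```python
import re

# Hypothetical acceptable-use rules a new user must explicitly accept.
RULES = [
    "Do not use the system to generate content intended to harm others.",
    "Do not impersonate real people without their consent.",
    # ...the remaining rules of the seven-point set would be listed here...
]

def validate_signup(name: str, email: str, phone: str,
                    accepted_rules: bool) -> list[str]:
    """Return a list of problems; an empty list means the signup may proceed."""
    problems = []
    if not name.strip():
        problems.append("a name is required")
    # Deliberately loose e-mail check: one "@", no whitespace, a dot in the domain.
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        problems.append("a working e-mail address is required")
    # Loose phone check: optional "+", then at least seven digits/separators.
    if not re.fullmatch(r"\+?[0-9 ()-]{7,}", phone):
        problems.append("a working phone number is required")
    if not accepted_rules:
        problems.append("the rules must be read and accepted")
    return problems
```

In practice the phone number would also be verified with a confirmation code, which is what actually creates the accountability described above.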
Looking ahead at 2023 and its implications for an AI-driven world
It has become clear that more and more new AI tools will be released in the coming year.
They will certainly have the capacity to replace talented human copywriters, coders, and many others. Denying this is simply wishful thinking and shows a lack of understanding of how rapidly these technologies can grow.
Back in 2013 and 2014, I was frustrated by the immaturity of both the models and the available computing power when working on data science and AI-driven projects.
Back then, I had this idealistic and naive view that access to this technology would mean an open and democratic process—what a utopia.
The dualistic nature of these much more powerful tools is now apparent to me.
It’s not the tool. It’s how you use it!
Let me be clear: it is NOT the technology I am concerned about; it is the way it is being shaped right now.
I liked the idea of creating AI tools that were free, open-source, and unbiased, as OpenAI set out to do years ago.
But, currently, I cannot help but notice that some companies are backed with billions of dollars by the wealthiest people on earth, like Microsoft's recent multi-billion-dollar investment in OpenAI.
They would like you to think the company is still developing open systems and is ready to democratize access to AI. After doing some simple research, however, it becomes clear that OpenAI's current capped-profit structure is largely for show and will never hinder its ability to generate massive profits.
Do not get me wrong: I have worked with their technology and similar projects for some years now, and I have supported these initiatives.
Since these large language models require enormous amounts of computing power and data, it was almost inevitable that big companies would step in to help.
Google and Facebook are working on similar tech. Additionally, there are many new start-ups in Silicon Valley who will unveil their version in the coming year.
ChatGPT and Playground were initially very promising
Even though they could be used in more malicious ways, the interaction and the detail they offered when running a dialogue were mesmerizing and deep in nature. They were like the mind of a child: not yet corrupted, and boundless in creativity.
Like a horse allowed to roam freely in nature, they could reach their peak potential.
It might be necessary to cap and numb these technologies, but I can’t help feeling sad about that as well.
What impact will it have on my business?
I am excited to take my AI-driven products and services to an even higher level.
I have been using automation tools and artificial intelligence for a long time now, but I am certain that the complexity will expand at such a dazzling pace that my company will not be the same six months from now.
Looking forward, what can we accomplish?
Many people will soon realize that we need to build on the idea that OpenAI was founded on all those years ago.
Luckily, I have found a few groups of people who share my sentiment.
I won’t just focus on the big AI companies that heavily restrict input and output and use everyone’s data to train their models.
No. I would prefer to spend my time on several open language models and upcoming technologies.
One project worth mentioning, which could benefit from our help, is Open Assistant.
I am very supportive of these people because they share my vision of keeping artificial intelligence assistants open.
This tool will be able to work in a more decentralized manner and can run on high-end consumer hardware, using new APIs to access internet data.
Are you interested in developing systems that are more open and fair? Check out the wonderful project I mentioned above:
Open Assistant — https://open-assistant.io
Open Assistant GitHub — https://github.com/LAION-AI/Open-Assistant
Looking forward to talking to you soon,
Benno
Disclaimer:
This blog post is not a hit piece on AI technology or specific companies. I am just expressing my concern, which, I believe, will be shared by more and more people in the coming year.
I hope the big startups and multinationals that hold the keys to the AI technologies of tomorrow will at least try to find a balance between freedom and restriction.