Hopes and concerns with new mainstream AI

I've got some hopes and concerns after recent releases of more mainstream AI applications and products. Yes, it's pretty scary, yet still exciting.

What some might not realize is that all this new consumer tech with AI integrations is the culmination of decades of research, going back to the 1930s, when Reinforcement Learning first emerged from developmental and behavioral psychology studies, and then to the 1960s, when RL methods were introduced into AI systems. This was the period when RL became a full-blown interdisciplinary field, bringing together philosophy, psychology, cognitive science, and computer science.

And only in recent years have we reached the point where AI trained with RL methods is being integrated into more and more facets of our lives.

Most definitely, I'm a little wary of how much agency we will continue to have over our own thoughts, patterns, and behaviors. There will come a time when we can no longer discern whether our thoughts truly belong to us, as the AI we have fed with copious amounts of data develops its own patterns and predictions. That's what this "inferring of cognitive state" method is doing: making deductions and inferences from the data we feed directly into these systems. The cycle begins as we act and think in certain ways, which trains the preference model, which in turn perpetuates the cycle. This continuous loop narrows our window of choice, limiting our autonomy and shaping our experiences in ways we may not fully understand.
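
To make that loop concrete, here is a minimal toy sketch in Python, assuming a simple click-trained recommender. Every name, weight, and parameter is an illustrative assumption of mine, not any real product's implementation: the model's weights grow on whatever the user clicks, and because the user can only click what the model surfaces, the set of items they ever see steadily shrinks.

```python
import random
from collections import Counter

# Toy simulation of the preference feedback loop described above.
# Every name and number here is an illustrative assumption, not a real system.

random.seed(0)
ITEMS = list(range(20))               # a catalogue of 20 hypothetical content items
scores = {i: 1.0 for i in ITEMS}      # the "preference model": one weight per item

def recommend(k=3):
    """Sample k distinct items, weighted by the model's current scores
    (Efraimidis-Spirakis keys for weighted sampling without replacement)."""
    keyed = sorted(ITEMS, key=lambda i: random.random() ** (1.0 / scores[i]), reverse=True)
    return keyed[:k]

history = []
for step in range(200):
    shown = recommend()
    clicked = random.choice(shown)    # the user picks from what is surfaced
    scores[clicked] += 1.0            # the click trains the model, closing the loop
    history.append(clicked)

# The window of choice narrows: early clicks are spread across the catalogue,
# late clicks concentrate on the few items the model keeps resurfacing.
print("distinct items in first 20 clicks:", len(set(history[:20])))
print("distinct items in last 20 clicks: ", len(set(history[-20:])))
print("most-clicked items overall:", Counter(history).most_common(3))
```

On a toy run like this, the first twenty clicks are typically spread across a dozen or more items, while the last twenty collapse onto just a few: the same narrowing-of-choice dynamic, in miniature.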

This is not the same leaps and bounds we saw with the internet boom; this stage of tech advancement will fundamentally change our society, our lifestyles, and human behavior as we know it today.

However, amid these concerns, I am also somewhat hopeful. Many questions are still being asked, and ongoing research is being conducted on AI safety and alignment. My hope is that AI can continue to enhance our lives while we preserve our sense of self, consciousness, and individuality as a human race.