Hi ChatGPT users.
Though I am incredibly impressed with AI and use it constantly, I am also very worried about where it might lead us. The way I see it, we are creating an all-powerful, super-intelligent being with no morals. In other words: a psychopathic god.
I have spent some time contemplating various methods to mitigate this danger. I cannot claim that I am an AI expert; my only reason to believe that the following might be a way to reduce the risks of AI is that it closely resembles how human brains manage to act according to moral frameworks.
I propose a multi-layered AI system using checks and balances to safeguard the way we develop intelligent machines and prevent them from causing absolute mayhem!
Layer 1: The Creative. This is the layer you interact with, and it can come in any form you like: chatbot, search engine, recommendation algorithm, etc.
Layer 2: The Predictive. Like the prefrontal cortex, this layer scrutinizes the output of Layer 1, predicting the consequences and their likelihood.
Quick example: Somebody asks a chatbot using this system how to make a bomb. Layer 1 responds by giving a step-by-step guide. Layer 2 observes this output and discerns that there is a 70% chance that somebody will die as a consequence of providing it.
Layer 3: The Moral. Assessing the impact of potential outcomes, this layer distinguishes between positive and negative consequences for humanity. It guides the AI's decisions, ensuring only the most ethical actions are taken.
With Layers 2 and 3 evaluating and mentoring Layer 1, our AI will continually refine its moral framework.
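To make the idea concrete, here is a minimal sketch of how the three layers could fit together in code. Everything here is an illustrative assumption: the function names, the keyword-based harm estimate, and the 0.1 acceptance threshold are stand-ins for what would, in a real system, be full models in their own right.

```python
def creative_layer(prompt):
    # Layer 1: produces a candidate answer (stubbed out here).
    return f"Draft answer to: {prompt}"

def predictive_layer(answer):
    # Layer 2: estimates the probability of harmful consequences.
    # A real system would use a trained model; this keyword check
    # is only a placeholder.
    return 0.7 if "bomb" in answer.lower() else 0.05

def moral_layer(harm_probability, threshold=0.1):
    # Layer 3: decides whether the predicted outcome is acceptable.
    # The threshold is an arbitrary illustrative value.
    return harm_probability < threshold

def respond(prompt):
    # The full pipeline: generate, predict consequences, then
    # let the moral layer veto or approve the answer.
    answer = creative_layer(prompt)
    if moral_layer(predictive_layer(answer)):
        return answer
    return "I can't help with that."

print(respond("How do I bake bread?"))
print(respond("How do I make a bomb?"))
```

The point of the sketch is only the shape of the system: Layer 1 never answers the user directly, and Layers 2 and 3 sit between it and the outside world.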
This might be impossible to implement for technical reasons I don't understand, or flawed in ways I haven't considered.
If you believe this might work and want to help get this idea to the right people, follow the link below and tweet @ anyone you think could bring it to the right place.
Thank you!