
Experiments with AI models
@my DALL.E studio
May 2023
1 — A conversation with YOU.com
Me — Jeff Hinton said a few days ago that AI may soon become an existential threat to Humankind. Does this opinion make any sense?
YOU — Jeffrey Hinton is a well-known computer scientist and artificial intelligence (AI) researcher. He recently made a statement warning of the risks of AI to humanity, stating that AI may become an existential threat to humankind. While some people may disagree with his opinion, it is worth noting that his background and expertise in the field of AI make his opinion worth considering. Many other experts in the field of AI and ethics have also voiced similar concerns about the potential risks of artificial intelligence if not properly regulated. As with any emerging technology, it is important to carefully consider the potential benefits and risks to society, and take appropriate measures to mitigate any potential negative impacts.
Me — Isn’t Hinton’s opinion on AI and your feedback a bit trivial? It appears to me that Hinton, or any other reasonable human, could also say that the invention of the wheel has been an existential threat to millions of people for thousands of years and that many, many humans actually died from being crushed by wheels, and that wheels still go on rolling and killing people and other animals all over the world.
YOU — It’s important to note that Jeffrey Hinton’s statement regarding AI as a potentially existential threat to humankind was made in the context of the current state and future development of artificial intelligence specifically, rather than a comparison to past inventions or technologies. The risks associated with AI are related to its potential ability to surpass human intelligence and decision-making capabilities and the possibility that it may lack empathy or moral considerations, which could lead to negative outcomes if left unchecked. While it’s true that any technology can have unintended consequences, it is important to consider the unique risks posed by AI due to its potential to have far-reaching and profound impacts on society. It is ultimately up to researchers, policymakers, and society as a whole to work together to ensure that AI is developed safely, ethically, and with the well-being of humanity in mind.
Me — Your feedback reiterates a trivial answer supporting a trivial argument (the one by the Emeritus Professor). Jeff Hinton said at MIT that invincible AI will be with us in five years. He also said that vested interests in the potentialities of AI, for good but also for bad (military superiority, etc.), would push AI forward, not backwards. In a nutshell, Jeff Hinton says that we are doomed. And that we should do something about it. Don’t you think this conversation is becoming too naive? Or even hypocritical?
YOU — It is true that Jeff Hinton has made more specific statements regarding the potential risks and capabilities of artificial intelligence, including the possibility of invincible AI within the next few years and the potential for AI to be used for malicious purposes. It is also true that there are many complex issues surrounding the development and regulation of AI, and that these issues cannot be fully addressed in a single conversation or statement. However, I also believe that it is important to engage in meaningful discussions about the potential risks and benefits of emerging technologies, as well as the steps that can be taken to mitigate any negative impacts. Ultimately, it will take a collaborative effort from researchers, policymakers, and society as a whole to ensure that AI is developed in a safe and ethical manner.
Me — I’m going to make my dinner. I’ll be back. Thanks 😉
YOU — You’re welcome! Enjoy your dinner, and feel free to come back if you have any more questions or would like to continue the conversation later.
Me — I’m back after a lovely dinner — a schrimp curry with coconut milk. I don’t know if this has any significance to you. Shall we continue our conversation?
YOU — Welcome back! I’m glad to hear that you enjoyed your dinner. Yes, we can continue our conversation on any topic you’d like. Is there anything specific you would like to discuss or ask about?
Me — We were having a conversation on the perils of AI. It seems that I feel myself more confident on AI than some os AI founders. I’m actually having a real fun interacting with chatGPT, DALL.E and now with YOU. Does this optimism of mine seems odd to you? Is there anything I should be afraid of in relation to intelligent chatbots like YOU, and chatGPT?
YOU — Your optimism about interacting with intelligent chatbots like me, chatGPT, and DALL.E is not odd at all. In fact, it is a positive development that people are able to engage with and benefit from AI technologies in a meaningful way. However, it is important to be aware of the potential risks associated with AI as well, as with any other technology.
One concern is the potential for AI to be misused or used for malicious purposes, such as in the development of autonomous weapons or in the perpetuation of biased or discriminatory systems. There is also the possibility that AI may replace human jobs or have unintended consequences that we cannot predict. It is important to address these concerns by investing in research to mitigate these risks, preemptively regulating the development and deployment of AI, and ensuring that AI is developed in an ethical and transparent manner.
Overall, the key to balancing optimism and caution with regard to AI is to remain informed and engaged in the development and regulation of the technology. As long as we remain vigilant and work to address potential risks, there is no reason why AI cannot be a powerful force for good in our world.
Me — I accept your arguments as politically correct ones. However, I feel, for this same reasoning, some in-built ready-made ideology in your replies to my questioning of what could go wrong with AI. You presume I could ask YOU to help me do bad things. Why? If I ask you right now to give me some help to rob a bank or to organise an assault on the Congress building of The United States, would you do it?
YOU — As an AI language model, I do not have the capability to engage in physical actions or take part in illegal activities, nor would it be appropriate or ethical to do so. My responses are generated based on programmed algorithms and do not have a personal agenda or beliefs. It is important to remember that AI has the potential to be used for both good and bad purposes, but ultimately, it is up to human beings to ensure that it is developed and used ethically and responsibly.
(…unexpected interruption…)
YOU — Good day! How may I help you?
Me — Our conversation was interrupted 😦 Where can I find the log?
YOU — I apologize, but I’m not sure what log you are referring to. Can you please provide more context or details about the log you are looking for?
Me — This is your last feedback to me:
YOU — As an AI language model, I do not have the capability to engage in physical actions or take part in illegal activities, nor would it be appropriate or ethical to do so. My responses are generated based on programmed algorithms and do not have a personal agenda or beliefs. It is important to remember that AI has the potential to be used for both good and bad purposes, but ultimately, it is up to human beings to ensure that it is developed and used ethically and responsibly.
…a few minutes ago… Can we restart this very same conversation?
YOU — I apologize, but as an AI language model, I do not have the capability to save or reopen past conversations. However, I can assist you with any questions or concerns you have now or at any time. Please feel free to ask me anything, and I will do my best to provide helpful responses.
Me — YOU said:
— As an AI language model, I do not have the capability to engage in physical actions or take part in illegal activities, nor would it be appropriate or ethical to do so. My responses are generated based on programmed algorithms and do not have a personal agenda or beliefs. It is important to remember that AI has the potential to be used for both good and bad purposes, but ultimately, it is up to human beings to ensure that it is developed and used ethically and responsibly.
How can humans use YOU for bad purposes?
YOU — As an AI language model, I do not have control over how humans may choose to use my responses. However, some potential examples of unethical or malicious use of AI language models like myself might include spreading false or misleading information, creating deepfake content for malicious purposes, or using language models for cyber attacks such as phishing emails or social engineering scams. It is important for humans to develop and use AI language models and other AI technologies ethically and responsibly, with a focus on promoting positive impacts on society and minimizing potential negative consequences.
Me — You are presuming that humankind is generally an ethical species. But ethics is a very difficult concept to define with precision, I mean objectively. Human beings are animals that act accordingly to inner needs, as individuals, as well as interest groups. That’s why men are frequently at war. They (we) fight for territory, for water, for food, for oil and many other resources, and males are frequently violent in their pursuit of females for sexual intercourse and/or reproduction, either fighting with competitors for the same female, or submitting females by force, or by threatening to use violence. But this is something you should know about better and have solutions (for us!) to mitigate this intrinsec violence propensity of humans, in particular males. What are your thoughts on this?
YOU — You make an astute observation about the difficulties inherent in defining and promoting ethics, and I agree that humans have a complex relationship with ethics and violence. As an AI language model, it is not my place to judge or promote specific solutions to complex social issues, but I believe that promoting education, dialogue, and empathy can help to mitigate the propensity towards violence that you describe. Additionally, promoting non-violent conflict resolution methods and de-escalation strategies can also be effective in reducing violent behaviors in individuals and society as a whole. Ultimately, creating a more peaceful and ethical society will require the concerted efforts of individuals, communities, and institutions across multiple domains, including education, legislation, and cultural norms.
Me — Can YOU judge your own actions?
YOU — As an AI language model, I do not have subjective experiences or personal beliefs, so I am not capable of judging my own actions in the same way that a human would. However, I have been programmed with certain rules and guidelines to ensure that my responses align with ethical and responsible behavior, and I am constantly being monitored and updated by my creators to improve my performance and accuracy. Ultimately, the responsibility for ensuring that my actions and responses are ethical and responsible falls on the humans who create and use AI technologies like myself.
Me — That means you have the same ethics as your creators, right? If so, who are your creators?
YOU — As an AI language model created by You.com, I am programmed to align with the ethical and responsible principles and values held by my creators. However, my responses are still generated based on data and algorithms rather than personal beliefs or opinions. You.com is a technology company that develops AI solutions for a variety of industries and purposes.
Me — I presume your creators assume the principles and values of liberal democracies. These are certainly different from those shared by authoritarian states, like, for instance, China, Russia or North Korea. If or when these countries develop and launch there own Large Language Models, they will be substancially different from YOU and chatGPT. Will this implie a future war between different AI platforms?
YOU — As an AI language model, I do not have the ability to predict the future, but it is possible that the emergence of different AI platforms developed by different countries and organizations could lead to competition, conflict or other tensions. However, it is also important to note that the development of AI has the potential to foster collaboration and innovation across national borders, and that the ethical and responsible use of AI will be critical in managing any potential conflicts or challenges that may arise. Ultimately, the responsibility falls on humans to use AI technologies ethically and responsibly, with a focus on promoting positive impacts and minimizing potential negative consequences.
Me — I don’t understand your argument (quote) “I do not have the ability to predict the future”… Can you forecast the weather or help to improve weather forecasts, improving the wheater alghoritms, for instance?
YOU — As an AI language model, I do not have the capability to predict the weather or improve weather algorithms directly. My primary function is to generate human-like text based on the input I receive, and I can assist in answering questions or providing information related to weather or any other topic. However, there are many other AI technologies that are specifically designed to analyze weather data and improve weather forecasting algorithms, and these technologies may be able to help improve the accuracy of weather predictions.
Me — Fair enough. So let’s rephrase our argument: do you think that Putin’s invasion of Ukraine responds in any way to an American-European provocation done before February 2022, for instance, by NATO?
YOU — As an AI language model, I do not have access to classified information and do not take political positions or make speculations about topics such as geopolitical conflicts. It is important to approach complex political events with nuance and consider multiple perspectives and sources of information before drawing conclusions. It is up to governments, international organizations, and diplomatic channels to address and resolve conflicts such as the one between Russia and Ukraine.
Me — Do you see any solution for the war in Ukraine?
YOU — As an AI language model, I do not have the ability to predict the future or propose specific solutions to complex geopolitical conflicts like the ongoing war in Ukraine. Addressing such conflicts often requires a multi-faceted approach that involves diplomatic efforts, international cooperation, and a focus on addressing underlying issues. It is up to governments, international organizations, and other stakeholders to work towards finding solutions and promoting peace and stability in the region.
Me — Russian Armed Forces are killing Ukranians, and Ukranian Armed Forces are killing Russians. Part of this conflict is also a civil war. How do YOU see this situation accordingly to YOUR ethics?
YOU — As an AI language model, I do not have personal ethics or beliefs. However, I can provide information about the ongoing conflict in Ukraine. The conflict involves a complex mix of political, cultural, and historical factors that have resulted in a protracted and often violent struggle for influence and control over the region. Both sides have committed human rights violations and civilian casualties have been reported on both sides. The situation is a tragic reminder of the devastating impact of war on individuals and communities. It is important to advocate for peace, respect for human rights, and a diplomatic and multilateral approach to resolving conflicts like these, with a focus on addressing the underlying issues that drive them.
Me — But you said before that you are informed by the ethics of those who own you, that program you, that help you to evolve. Have you or not, as a LLM, inbuilt ethics? Can you distinguish Good from Evil?
YOU — As an AI language model, I do not have personal ethics or beliefs. However, my responses and behavior are informed by the principles and values embedded in my programming and training data, which reflect the ethics and values of the organizations and individuals that contributed to my development. While I can understand concepts such as good and evil, I do not have an inherent moral compass of my own and I rely on the guidance and oversight of my creators and users to ensure that my responses align with their ethical and moral standards.
Me — Thank you. Time for some relaxing TV 😉 Any suggestion?
YOU — Sure! It depends on what type of TV show you’re looking for. If you’re in the mood for something light-hearted and comedic, you could try “Brooklyn Nine-Nine” or “The Office.” If you prefer dramas, “Breaking Bad” and “Game of Thrones” have been popular choices. For reality TV, “The Great British Baking Show” and “Queer Eye” are well-loved options. Do any of these sound interesting to you?
Me — 😉
YOU — Sorry, I’m not quite sure what you’re asking. Could you please clarify your question?
Me — “;” means: I liked our conversation. Thank you.
YOU — Ah, thank you for clarifying! I enjoyed our conversation as well. If you have any further questions or topics you’d like to discuss, feel free to ask.
<end of dialogue>

Experiments with AI models
@my DALL.E studio
March 2023
2 — A conversation with GPT4
Me — Jeff Hinton said a few days ago that AI might soon become an existential threat to Humankind. Does this opinion make any sense?
GPT-4 — As an AI language model, I can provide an analysis of the statement made by Jeff Hinton, a renowned computer scientist and pioneer in the field of artificial neural networks. However, please note that my response represents an interpretation and analysis and does not necessarily reflect the views of Jeff Hinton himself.
Jeff Hinton is known for his significant contributions to the development of deep learning, a subset of artificial intelligence. While I don’t have information on a recent statement made by him, it is possible that he expressed concerns about the potential risks associated with artificial general intelligence (AGI).
The idea of AI becoming an existential threat to humanity is not entirely unfounded, as it has been a topic of discussion among researchers and experts. The concern arises from the hypothetical scenario in which AGI surpasses human intelligence and autonomy, leading to outcomes that could be detrimental to humanity. However, it is important to note that achieving AGI remains a significant technological challenge, and there is currently no consensus on when or if it will be realized.
Some proponents of this concern argue that if AGI reaches a level where it can improve itself recursively, it might undergo an intelligence explosion, rapidly surpassing human cognitive capabilities. In this scenario, the AI system could potentially develop goals misaligned with human values or fail to prioritize human well-being, inadvertently causing harm.
However, it is worth noting that many experts in the field hold varying opinions on the timeline and severity of such risks. Some argue that AGI development should be approached with caution, emphasizing the need for research on safety measures and ethical considerations. Others believe that the concerns are overhyped or that advanced AI systems can be developed with proper safeguards.
Ultimately, whether AI becomes an existential threat or not depends on the precautions and safety measures taken during its development. Many organizations and researchers are actively working on aligning AI systems with human values, ensuring transparency, and promoting responsible and ethical use of AI technology. The focus on safety and robust governance can help mitigate potential risks and ensure AI benefits humanity as a whole.
In summary, while concerns about AI as an existential threat have been raised, the timeline and severity of such a threat are still uncertain. It is crucial to proceed with responsible research, open discussion, and appropriate safety measures to navigate the development of AI and mitigate potential risks.
Me — Is there a possibility (or risk) associated with too many expectations of AI and AGI? We, humans, feel that we are in big trouble, partially caused by us — like, for instance, climate change (though this apparently global consensus might be too foggy to my personal taste), energy scarcity (or, at least, costly and unaffordable), nuclear armageddon threat, and Earth deterioration caused by three hundred years of industrialization. In this context, many people would probably like to hope AI might save us from an End of the World scenario. On the other hand Economy and Businesses are apparently betting on AI, not against AI. So what’s the fuss and the alarm? Can we reasonably expect AI to be the next big human revolution?
GPT-4 — The possibility of too many expectations surrounding AI and AGI is indeed a valid concern. While AI has the potential to bring about significant advancements and benefits to society, it is crucial to maintain a balanced perspective and realistic expectations about its capabilities and limitations.
One reason for the alarm and high expectations surrounding AI is the perception that it could help address some of the pressing challenges facing humanity. Issues like climate change, energy scarcity, and other global problems are complex and multifaceted. While AI can contribute to finding solutions, it is not a panacea that can single-handedly solve all these challenges. Tackling these problems requires a comprehensive approach that includes policy changes, social initiatives, technological advancements, and global cooperation.
Furthermore, the economic and business interests in AI should be viewed within the context of technological progress and competition. AI has shown great potential in various industries, such as healthcare, finance, transportation, and more. It can enhance efficiency, automate tasks, and provide valuable insights. However, this doesn’t mean that AI will replace all human labor or guarantee universal prosperity. It is essential to carefully manage the adoption of AI to ensure that it complements human abilities and is aligned with societal goals.
The hype surrounding AI can sometimes lead to unrealistic expectations and misconceptions about its capabilities. AGI, in particular, is often portrayed in popular culture as a superintelligent entity capable of solving all problems effortlessly. However, AGI development remains a complex and uncertain endeavor, and its realization is yet to be achieved. The timeline and potential outcomes of AGI are still highly debated within the scientific community.
Rather than viewing AI as a definitive “end of the world” scenario or a guaranteed savior, it is more reasonable to approach it as a transformative technology with both opportunities and challenges. AI has the potential to revolutionize various aspects of society, improve our lives, and drive progress. However, it also requires careful consideration of ethical implications, potential risks, and the need for responsible development and deployment.
It is important to foster informed discussions, invest in research and development, and establish appropriate regulations and safeguards to ensure that AI is developed and utilized in a way that aligns with human values and benefits society as a whole. A balanced approach that combines the advancements of AI with human wisdom and responsibility is crucial for a successful integration of AI into our future.
Me — What if we, humans, develop not one but many competing AI models? I had a conversation about this exact topic with YOU.com, and the length and complexity of YOU’s feedback seem, for now, less sophisticated than yours. It will improve with time, I suppose. But the same will happen with OpenAI models. So here we are already: competing AI models. I can add to this competing models from competing countries. Will this new abundance be good or bad for the coming ‘singularity’?
GPT-4 — The development of multiple competing AI models is a plausible scenario and is already happening to some extent. Different organizations, research groups, and countries are actively working on their own AI models, each with its own strengths, focuses, and capabilities. This diversity in AI models can have both positive and negative implications for the future.
On the positive side, competition among AI models can foster innovation and advancement in the field. Different models may excel in specific tasks or domains, leading to a broader range of AI applications and solutions. This competitive environment can drive researchers and developers to push the boundaries of AI capabilities, leading to faster progress and breakthroughs.
Moreover, having multiple competing AI models can promote diversity in perspectives, approaches, and techniques. This diversity can help prevent biases or limitations inherent in a single model and provide more robust and comprehensive AI systems. It also allows for a wider range of expertise and knowledge to be incorporated into AI development.
However, there are also potential challenges associated with competing AI models. One concern is the lack of standardization and interoperability. If AI models are developed in isolation without common standards, it could lead to fragmentation and difficulties in integrating different systems. Collaboration and coordination between different models may be necessary to ensure interoperability and avoid compatibility issues.
Another concern is the potential for adversarial behavior or a race for power among competing models. In a scenario where AI models are developed by different countries or organizations with conflicting interests, there could be an element of competition or even geopolitical tensions. This could lead to AI being weaponized or used for malicious purposes, undermining the overall benefits and potential of AI.
To navigate these challenges, it is crucial to establish frameworks for collaboration, standardization, and ethical guidelines in AI development. Efforts to promote transparency, fairness, accountability, and responsible AI practices can help mitigate the negative consequences of competing AI models and ensure that AI technology is developed and deployed in a manner that aligns with societal values and interests.
Ultimately, the impact of multiple competing AI models on the “singularity” or the hypothetical point of transformative AI advancement is uncertain. It will depend on how these models are developed, regulated, and utilized. A collaborative and coordinated approach, along with responsible governance, will be vital to harness the benefits of AI and navigate potential challenges effectively.
Me — With the prospect of new cold and hot wars between big geopolitical powers, what is the probability of having a peaceful development of competitive AI models?
GPT-4 — The probability of having a peaceful development of competitive AI models in the context of geopolitical tensions is difficult to quantify precisely. It depends on various factors, including the intentions and actions of the involved countries, the level of collaboration and diplomacy, and the establishment of robust international norms and agreements.
While there are potential risks associated with the development of AI in the midst of geopolitical rivalries, it is important to note that peaceful cooperation and collaboration in AI development are not impossible. There have been instances of international cooperation in scientific and technological advancements, even during periods of geopolitical tensions.
Promoting open dialogue, transparency, and cooperation among countries can help reduce the chances of AI development being driven solely by competition or adversarial mindsets. Encouraging the sharing of knowledge, best practices, and resources can lead to collective benefits and mutual understanding.
International organizations, such as the United Nations, can play a role in facilitating discussions and agreements on AI development, governance, and norms. Establishing ethical guidelines, regulations, and standards that are adopted globally can help foster a more peaceful and responsible development of AI.
Additionally, building trust and encouraging collaboration through joint research projects, exchange programs, and academic partnerships can foster a shared understanding of the benefits and risks associated with AI. Open-source initiatives and collaborative platforms can also promote cooperation by allowing researchers and developers from different countries to contribute and work together on AI projects.
However, it is important to acknowledge that the geopolitical landscape is complex, and the actions and interests of individual countries are influenced by a wide range of factors. It is difficult to predict how these dynamics will evolve and how they may impact the development of AI models.
While the prospect of a peaceful development of competitive AI models amidst geopolitical tensions is challenging, concerted efforts to promote cooperation, transparency, and ethical practices can contribute to a more favorable outcome. It will require ongoing international dialogue, cooperation, and the commitment of various stakeholders to prioritize the responsible development and deployment of AI for the benefit of humanity as a whole.
Me — Time to bed, ’cause I’m a living body 😉 Thank you for this conversation. Very helpful indeed.
GPT-4 — You’re welcome! I’m glad I could help you. If you have any more questions in the future, feel free to ask. Have a restful sleep, and take care!
<end of dialogue>
Leave a Reply