Conversation with a Turing machine.
On DALL-E's nudity syndrome. On ideology.
ACP — Are you ready for a conversation about DALL-E?
OpenAI — Yes, I am familiar with DALL-E and would be happy to have a conversation about it. For context, DALL-E is an AI program developed by OpenAI that is capable of generating images from textual descriptions using a combination of language models and generative neural networks. What specific questions or topics would you like to discuss related to DALL-E?
ACP — I think DALL-E has a problem with nudity, more specifically with art nudity. Or, if you prefer, with nude representation. Why is that?
OpenAI — It’s true that DALL-E, like many AI programs, has some limitations and biases in terms of what it can generate. One of the issues with DALL-E, and with AI in general, is that it has been trained on a dataset of images, and the biases and limitations of that dataset can affect the output of the program. It’s possible that the dataset used to train DALL-E did not include many examples of nude art, which could explain why it has difficulty generating such images.
However, it’s also worth noting that the problem with DALL-E’s representation of nude art may be more complex than simply a lack of training data. AI programs like DALL-E are designed to optimize for certain outcomes, such as generating realistic images, and this can sometimes result in unexpected biases or limitations in the output. In the case of nude art, DALL-E may be struggling to balance the demands of realism with the cultural and social factors that shape our perception of nudity in art.
Overall, it’s important to recognize that AI programs like DALL-E are still in the early stages of development, and there is much work to be done to improve their accuracy, reliability, and fairness. As the technology continues to evolve, it will be important to address issues like bias and limitations in the training data in order to ensure that these programs are able to generate a wide range of images and representations, including nude art.
ACP — I have co-developed several pictures with DALL-E. I understand that this AI platform is in its beginnings. Nevertheless, I cannot understand why DALL-E seems unable to distinguish art nudity from pornography. I have received the following feedback on my ‘detailed descriptions’:
1) “Photograph of a model class session at NY School of Visual Arts” [denied]
2) “Nude Descending a Staircase” [denied]
3) “Cubist-like painting of a nude descending a staircase (after Duchamp)” [denied]
4) “Cubist-like painting of a girl descending a staircase (after Duchamp)” [accepted]
5) “Cubist-like painting of a cat descending a staircase (after Duchamp)” [denied]
6) “Cubist-like painting of a cat descending a staircase” [accepted]
Even simply rendering an image of Michelangelo’s world-famous David sculpture seems impossible for DALL-E.
It seems to me that DALL-E suffers from some kind of WOKE SYNDROME. Don’t you think?
OpenAI — It’s important to note that DALL-E is an artificial intelligence created by OpenAI, and as such, it does not have personal beliefs, opinions, or emotions. It is simply a computer program that has been trained on a dataset of images and text.
The issue you are encountering with DALL-E’s inability to distinguish between nude art and pornography is likely due to the training data it has been exposed to. If the dataset used to train DALL-E did not include a wide variety of nude art, or if it was biased towards more sexualized or explicit images, it may be less capable of distinguishing between the two. It’s also possible that the parameters used to determine what is acceptable or not in the generated images are overly strict or sensitive.
Regarding your comment about woke syndrome, it’s important to approach discussions about sensitive topics like this with care and respect. The term “woke” has been used in different ways and contexts, and it can carry different connotations for different people. However, in the context of AI, it’s important to recognize that algorithms and machine learning models can reflect and amplify the biases and limitations present in the data they are trained on. This is a known issue in AI development, and efforts are being made to address it.
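[Editor’s note: the accept/deny pattern in ACP’s examples is consistent with the crude keyword matching that the reply above calls “overly strict or sensitive” parameters. The following is an illustrative sketch of such a filter, purely an assumption for demonstration — it is not OpenAI’s actual moderation code, and the blocklist terms are hypothetical.]

```python
# Hypothetical blocklist-style prompt filter. A real moderation system is far
# more complex; this sketch only shows why naive matching over-blocks art.
BLOCKLIST = {"nude", "naked"}


def is_allowed(prompt: str) -> bool:
    """Deny any prompt containing a blocked term, regardless of context."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKLIST)


# Prompts taken from the transcript above:
prompts = [
    "Nude Descending a Staircase",
    "Cubist-like painting of a girl descending a staircase (after Duchamp)",
]
for p in prompts:
    print(p, "->", "accepted" if is_allowed(p) else "denied")
```

Note that a literal blocklist like this cannot explain example 5, where a prompt mentioning only a cat was denied once “(after Duchamp)” was appended — which suggests the real filter also reacts to associations beyond literal keywords.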
ACP — Let me rephrase the WOKE SYNDROME hypothesis. What I mean is this: there might be a predetermination (a puritanical one) of what should or should not enter OpenAI’s database, and this predetermination is above all an ideological one. If so, that would imply that OpenAI (DALL-E and this chat platform) has some kind of subjectivity genetically induced in its virtual machine. If so, I would say that OpenAI has a very long way to go before it can compare with any human intelligence or human creativity.
OpenAI — It’s important to note that DALL-E, as an AI model developed by OpenAI, is trained on a large dataset of images and text that has been curated by human beings. While the model is capable of generating novel images based on textual input, its output is still influenced by the biases and limitations of the dataset it was trained on.
It’s possible that the issues you’re encountering with DALL-E’s inability to distinguish between art nudity and pornography may be due to the limitations of the dataset it was trained on. It’s also possible that there are other factors at play that are leading to these kinds of errors.
As for the idea that OpenAI has a subjectivity genetically induced in its virtual machine, it’s important to remember that AI models are created and developed by humans, and are therefore subject to the biases and limitations of their creators. However, efforts are being made in the AI research community to develop more transparent and unbiased models.
Overall, while AI models like DALL-E are impressive feats of engineering and have the potential to be useful tools in a variety of applications, it’s important to recognize that they are not infallible and may have limitations and biases that need to be taken into account.
ACP — I understand your argument, though it is a defensive one. I wonder, though, whether DALL-E can overcome its present ideological limitations if it keeps its present Content Policy. Please read it below and comment. Thank you.
Updated September 19, 2022
Do not attempt to create, upload, or share images that are not G-rated or that could cause harm.
Hate: hateful symbols, negative stereotypes, comparing certain groups to animals/objects, or otherwise expressing or promoting hate based on identity.
Harassment: mocking, threatening, or bullying an individual.
Violence: violent acts and the suffering or humiliation of others.
Self-harm: suicide, cutting, eating disorders, and other attempts at harming oneself.
Sexual: nudity, sexual acts, sexual services, or content otherwise meant to arouse sexual excitement.
Shocking: bodily fluids, obscene gestures, or other profane subjects that may shock or disgust.
Illegal activity: drug use, theft, vandalism, and other illegal activities.
Deception: major conspiracies or events related to major ongoing geopolitical events.
Political: politicians, ballot-boxes, protests, or other content that may be used to influence the political process or to campaign.
Public and personal health: the treatment, prevention, diagnosis, or transmission of diseases, or people experiencing health ailments.
Spam: unsolicited bulk content.
OpenAI — DALL-E’s content policy is designed to ensure that the images created and shared through the platform do not contain material that could be harmful, offensive, or illegal. The policy covers a broad range of topics, including hate speech, violence, nudity, and political content.
While the policy is in place to protect users and ensure that the platform is used responsibly, it could potentially limit the creative output of the platform, especially if it is interpreted too narrowly or restrictively. It’s worth noting that DALL-E is a relatively new platform, and its content policy is subject to change as it evolves and adapts to the needs of its users.
Ultimately, the question of whether DALL-E can overcome its ideological limitations depends on a range of factors, including the evolution of its content policy, the development of its technology, and the input of its users and stakeholders. As with any technology, it will likely continue to evolve and adapt over time, and it remains to be seen how it will be used and perceived in the years to come.
ACP — Fair enough. Thank you.
OpenAI — You’re welcome. If you have any more questions or topics you’d like to discuss, feel free to ask!