Have You Seen This AI Avocado Chair?
by Travis Tang (Voon Hao) | Jan 2021

Now, DALL-E might seem fascinating, but it is by no means perfect. Here are three specific instances where DALL-E fails.

DALL-E may fail on some texts

DALL-E might work very well on certain phrases yet fail on semantically equivalent ones. That is why the authors of DALL-E sometimes need to repeat phrases in their prompts to increase the success rate of the image-generation task.
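
As a rough illustration of that repetition trick, here is a minimal Python sketch of how one might build such a prompt. Note that `generate_images` is a hypothetical placeholder: DALL-E has no public API at the time of writing, so you would swap in whatever image-generation backend you actually have access to.

```python
# A rough sketch of the "repeat yourself in the prompt" trick described above.
# `generate_images` is a hypothetical placeholder, not a real DALL-E API call.

def build_repeated_prompt(caption: str, repetitions: int = 2) -> str:
    """Concatenate the same caption several times into a single prompt."""
    return " ".join(f"{caption}." for _ in range(repetitions))


def generate_images(prompt: str, n: int = 4):
    """Placeholder for whatever image-generation backend you have access to."""
    raise NotImplementedError("Plug in your own image-generation call here.")


if __name__ == "__main__":
    caption = "an armchair in the shape of an avocado"
    prompt = build_repeated_prompt(caption, repetitions=2)
    print(prompt)
    # -> "an armchair in the shape of an avocado. an armchair in the shape of an avocado."
    # images = generate_images(prompt, n=4)  # swap in your own backend
```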

DALL-E may fail on unfamiliar objects

DALL-E is sometimes unable to render objects that are unlikely to occur in real life. For instance, when asked for a photo of a pentagonal stop sign, it instead produces images of a conventional octagonal stop sign.

A confused DALL-E, not knowing what a pentagonal stop sign looks like. Image by OpenAI [1]

DALL-E may take shortcuts

When prompted to combine objects, DALL-E might instead opt to draw them separately. For instance, when prompted to draw an image of a cat made of a faucet, it draws a faucet and a cat side by side.

Cat + Faucet = ? Image by OpenAI [1]

OpenAI previously licensed GPT-3 technology to Microsoft. Similarly, OpenAI might license DALL-E to tech giants that have the resources to deploy and govern it effectively.

DALL-E is extremely powerful and conceivably has monumental societal implications. If generative adversarial networks (GANs) can be used to generate deepfakes, imagine the possible negative repercussions of DALL-E generating fake images and proliferating fake news.

We need a way to regulate DALL-E in the age of fake news and photos. Photo by Markus Winkler on Unsplash

Some of the ethical issues that DALL-E needs to address are hairy and ambiguous. Such issues need to be resolved before we can expect widespread adoption of this artificial intelligence technology in our everyday lives. These complex issues include:

  1. Potential bias in model outputs (what if DALL-E under-represents minority groups?)
  2. Model safety and accidents (i.e., how do we prevent the AI from accidentally doing something harmful?)
  3. AI alignment (i.e., getting the model to do what you want it to do)
  4. The economic impact on certain professions (think of the illustrators whose jobs might be displaced by such technology)

That aside, the deep learning and machine learning space is an extremely exciting place to be right now. You can read more about OpenAI on their page here.

