The products of AI-generated media range from confusing to absolutely astonishing. As we progress toward an automated reality where tasks can be completed and questions answered by AI, it becomes more and more important to understand it. As I went through the examples of AI-generated media, I noticed the gap between recognition and recreation in AI. Using the “this person does not exist” link, you’re presented with realistic photos of people that have been fully generated by AI. This is what I would consider recognition. Then there are the scripts and short stories generated by AI that can be almost accurate but lack the finesse and nuance human language and stories tend to contain. Both, however, bleed into each other. While the recognition of people is flawless and the generated photos almost flawless, you can still generate photos of people with mismatched piercings or two different eyes. To me, this is where the recreation becomes skewed amid the almost-perfect recognition. In the same sense, when we move to literary recreations or script-based short stories, we see a clear skeleton of format: the AI picks up on common storytelling tropes and on the rising action, climax, and falling action of stories, but the recreation again falters. When an AI processes data, it lacks the emotional charge that humans infuse into words. It’s purely objective, placing words together in sentences that are grammatically decent though sometimes nonsensical (kind of like the “Does Bruno Mars Is Gay?” article). AI can still go beyond this and understand story format when given a dataset, but it’s basically just finding commonalities and applying them in a formula that will complete its preprogrammed goal. It’s cold. Together, these two types of recognition and recreation show the complexity of the brain and the value of human subjectivity. They also have a fascinating way of highlighting the importance of intelligence, or emotion, over physicality.
Every person the AI generates could plausibly be a real person, flaws and all, but when it comes to depicting and eliciting emotion, it’s unable to apply the nuances of human relationships. They say the way to write the best literary works is to “write what you know,” but AI takes this far too literally. An AI is capable of writing a love story, but it’s unable to build an emotionally complex character arc or appropriately match moods to actions or facial movements, because it has never experienced what those complex emotions feel like and has no personal context of reaction to draw on. Instead, its formula basically amounts to a “Mad Lib” themed around its programmed goal. However, the extent of recognition AI can achieve in literature is still an impressive feat. Even without accurately depicting emotion, it’s able to create its own formula based on a given dataset, down to the trope of the “quirky girl next door” or the “secret that can wait until they are in a dire situation.” Based on this, it’s arguable that, given every possible dataset available to it, AI could potentially fully automate humanity…or completely overload and “Does Bruno Mars Is Gay” all over the place (though based on how AI learns from datasets, this is the less likely of the two). The implications we are aware of today obviously include the automation of jobs, but there are only certain objective-driven jobs AI can perform. This breaks a lot of capitalistic ideas and could potentially birth a generation of artistic renaissance, simply because AI hasn’t perfected those human nuances yet. It will be interesting to see how the value of things shifts as AI becomes more popular.