It’s SIGGRAPH week, that time of year when different industries — including architectural design, medical and scientific visualization, and VFX and filmmaking — converge in one place to consider the present, future, and blue-sky trajectory of computer graphics. It’s easy to get dazzled by the wild creativity and high-level engineering on display; it can be a little harder to figure out what the concrete implications are for the foreseeable future of media and entertainment. Based on what we know about the show, here’s a breakdown of some of the more relevant topics under discussion in 2017.

1) AI and Machine Learning Are Getting Practical

Full-blown intelligent robots remain the stuff of science fiction, but some recent developments in AI research have been at least intriguing, if not game-changing. Researchers at Facebook’s Artificial Intelligence Research Lab noted that chatbots in conversation with one another had “developed their own language for negotiating.” It’s not quite as dramatic as it sounds — the “language” mainly consists of repetitive combinations of words that read as gibberish to humans but apparently convey an encoded meaning when unpacked by efficiency-seeking chat agents. Likewise, “deep learning” projects that allow computers to generate striking or bizarre new images — like the GIF above, which mashes up President Donald J. Trump and a pile of ground beef — aren’t going to put VFX artists out of business, but they suggest ways that computers can be used under the guidance of artists to develop and execute unusual effects. And they do point toward the possibility of eliminating more of the gruntwork — think roto-like jobs of image clean-up, object removal, and even depth-mapping — that’s traditionally essential to the VFX industry. However it shakes out, expect clever uses of deep learning and other areas of AI research to have an increasing impact on media and entertainment, and look for interesting examples at SIGGRAPH.
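
To make the clean-up idea concrete, here is a minimal sketch of the classical, non-deep-learning version of an object-removal task, using OpenCV’s inpainting function to paint out a masked wire or rig. The file names and mask are placeholders; deep-learning approaches aim to do the same job with less hand-built masking and better temporal consistency across frames.

```python
# Minimal paint-out sketch: classical inpainting with OpenCV.
# File names are placeholders; "wire_mask.png" is assumed to be a hand-drawn
# mask in which white pixels mark the rig or wire to be removed.
import cv2

frame = cv2.imread("plate_frame.png")                     # source plate
mask = cv2.imread("wire_mask.png", cv2.IMREAD_GRAYSCALE)  # pixels to remove

# Telea's fast-marching inpainting fills the masked region from its surroundings.
cleaned = cv2.inpaint(frame, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("plate_frame_clean.png", cleaned)
```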

2) The Cloud Makes More Sense Every Year

Amazon’s recent acquisition of Thinkbox, now part of its Amazon Web Services offerings, shows that Amazon — like Google, which boasts its own Zync cloud rendering service — believes content creation will be a lucrative part of the mix as studios move toward hybrid cloud or, in some cases, all-cloud pipelines. The field remains unsettled: best practices for achieving a favorable ROI are still taking shape, as are policies for licensing the required software tools to run in the cloud. But while migration to the cloud has been tentative and slow, it is clearly accelerating. Look for increased support and adoption of cloud workflows at SIGGRAPH and beyond, as studios in a notoriously difficult, low-margin industry look for ways to secure burst capacity for inevitable crunch periods without investing in infrastructure that sits idle the rest of the year. A big open question: how will licensing of software tools (from render management software to workaday pipeline tools) be handled in the cloud? Watch for new solutions addressing these issues to come out of the show.
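
As a toy illustration of why that burst-capacity math tends to point toward the cloud, here is a back-of-the-envelope sketch. Every number in it is an assumption chosen for illustration, not a real hardware or cloud price; the point is simply that owned nodes cost the same whether they render or sit idle, while cloud hours are paid only when used.

```python
# Back-of-the-envelope burst-rendering math. Every figure below is an
# illustrative assumption, not a real hardware or cloud quote.

node_cost_3yr = 9000.0        # assumed cost to buy, power, and maintain one render node for 3 years
cloud_rate = 1.00             # assumed hourly rate for a comparable cloud render instance
crunch_hours_per_year = 2000  # hours per year the extra capacity is actually needed
years = 3

on_prem_cost = node_cost_3yr                             # paid whether the node renders or idles
cloud_cost = cloud_rate * crunch_hours_per_year * years  # paid only for hours actually used
break_even = node_cost_3yr / (cloud_rate * years)        # utilization at which the two options cost the same

print(f"on-prem node over {years} years: ${on_prem_cost:,.0f}")
print(f"cloud burst over {years} years:  ${cloud_cost:,.0f}")
print(f"cloud is cheaper below ~{break_even:,.0f} render-hours per year of real demand")
```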

3) VR Has Passed a Huge Milestone, but There’s Still a Long Road Ahead

2016 was truly the year of VR, with highly anticipated headsets — Oculus Rift, HTC Vive, and PlayStation VR — finally shipping to paying consumers. For all the financial investment, feverish development, overstated hype and, often, genuine delight wrapped up in first-generation VR experiences, the technology is a long way from meeting its potential. There are two big problems. First, quality VR experiences demand an enormous amount of overhead, in terms of both computing power and bandwidth. Current headsets, running at something like HD resolution for each eye, still deliver fuzzy video with aliasing and other visible flaws. An improved VR headset might have 4K resolution for each eye. But how do you stream the equivalent of 8K video to a consumer-friendly device? A stopgap solution may be to track the user’s gaze and deliver only the area under scrutiny at full resolution, streaming the rest of the environment at lower res to reduce overall bandwidth demands (a rough sketch of that math follows below). Second, the headsets themselves isolate the wearer from everyone around them, and many observers feel AR will have a bigger long-term impact than VR, meaning we still have a long, long way to go when it comes to immersive goggle design. SIGGRAPH’s VR Village is considering these ideas and more, with innovative exhibits like Bridget, the mixed-reality robot; a demonstration of “Hallelujah,” an experience captured with the Lytro Immerge VR rig; and Disney’s Magic Bench, an installation that lets users interact with CG characters from a third-person POV with no headset at all — and none of the social isolation that comes with one.
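
Here is a rough sketch of the bandwidth argument behind that gaze-tracked (foveated) streaming idea. The resolutions, refresh rate, and effective compression figure are illustrative assumptions, not specs from any shipping headset; the takeaway is simply how much a small high-res window plus a low-res periphery can shave off the total bit rate.

```python
# Rough bandwidth comparison for gaze-tracked (foveated) streaming.
# All resolutions, the refresh rate, and bits-per-pixel are illustrative assumptions.

def mbps(width, height, fps, bits_per_pixel):
    """Approximate bit rate in megabits per second."""
    return width * height * fps * bits_per_pixel / 1e6

fps = 90   # typical VR refresh rate
bpp = 0.1  # assumed effective bits per pixel after video compression

# Naive approach: stream full 4K to each eye.
full = 2 * mbps(3840, 2160, fps, bpp)

# Foveated approach: a small high-resolution window where the user is looking,
# plus the whole field of view at much lower resolution.
fovea = 2 * mbps(960, 960, fps, bpp)
periphery = 2 * mbps(1920, 1080, fps, bpp)

print(f"full-resolution stream: {full:6.1f} Mbps")
print(f"foveated stream:        {fovea + periphery:6.1f} Mbps")
```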

4) Film and TV Production Goes Real-Time

2017 is the year the Unreal game engine started to make a real splash in content creation beyond the games market. At NAB, Epic Games showed Unreal Engine working in broadcast production applications from Ross Video, Vizrt, and more. Unreal is also gaining favor in feature film production, where directors and VFX supervisors can use it for high-quality previs — or even to generate full-length animated films inside Unreal Engine. At SIGGRAPH tonight, Epic is holding its first Unreal Engine User Group at the Orpheum Theatre. It seems like a safe bet that Epic has something new and interesting to show the content creation community — Epic Enterprise GM Marc Petit, for many years a VP at Discreet and Autodesk, promised “workflow tools that simplify pipelines considerably” in a statement. Beyond what Epic is up to with Unreal, SIGGRAPH’s Real-Time Live! event is traditionally the place to see the latest in interactive techniques. This year, it looks like you’ll be able to watch Real-Time Live! streaming live starting at 6 p.m. PT on Tuesday, August 1. (For an index of SIGGRAPH live streaming sessions, check this page.)

5) More Than Ever, Your Eyes Deceive You

As CPUs and GPUs become more powerful and photorealistic animation techniques become more sophisticated, video is about to become nearly as malleable as still photographs have long been. Before there was Photoshop, there was the airbrush, along with various darkroom techniques that could be used to alter the reality of a camera image captured on a photographic negative. Moving images have proven much harder to fake: it’s hard enough to make a seamless edit to a single image; it’s a lot harder to do it in a way that holds up temporally, as a series of frames flickers by at high speed. But new techniques are increasingly able to make seamless substitutions based on lip-sync libraries, facial recognition, machine learning, and other emerging technologies. The University of Washington’s “Synthesizing Obama” [PDF] project is one example. (Watch the video here.) Other techniques for generating facial animation from audio of human speech will be presented in various technical papers at the show. If you thought #fakenews was a trending topic now, wait until hackers and video pranksters discover this shiny new way to manipulate public opinion.
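
For a sense of the plumbing involved, here is a minimal sketch of one building block such systems rely on: tracking mouth landmarks frame by frame so that synthesized lip shapes can be mapped back onto source footage. It uses dlib’s standard 68-point face model; the video path and model file are placeholders, and this is only a first step, not the “Synthesizing Obama” pipeline itself.

```python
# Track mouth landmarks across a talking-head video with dlib's 68-point model.
# "talking_head.mp4" and the .dat model file are placeholders you would supply.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

cap = cv2.VideoCapture("talking_head.mp4")
mouth_tracks = []  # per-frame lists of (x, y) mouth landmark coordinates

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if faces:
        shape = predictor(gray, faces[0])
        # Points 48-67 of the 68-point model outline the mouth.
        mouth_tracks.append([(shape.part(i).x, shape.part(i).y) for i in range(48, 68)])

cap.release()
print(f"tracked mouth shapes on {len(mouth_tracks)} frames")
```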