Adobe is using its Creative Cloud-centered Adobe Max conference to roll out new features in Premiere Pro, Premiere Rush, After Effects, Audition, and Character Animator.
The company is also looking to load up your iPad. Today it launched the long-promised Photoshop on iPad alongside Adobe Aero, a new app designed to help designers create AR experiences. And Adobe confirmed that a version of Illustrator will debut on the iPad sometime next year, along with a new mobile camera app for iOS and Android called Photoshop Camera.
The biggest news for editors is arguably the Auto Reframe effect, which uses Adobe’s Sensei AI technology to analyze and reframe footage for different aspect ratios. This eliminates the need to scale video manually and/or keyframe position changes for widescreen footage that’s destined for reuse on social media platforms that favor square imagery, or to pan across a vertical video frame for 16×9 display. (Keyframes can be edited as necessary after the effect is applied.) Graphic elements (such as title overlays or lower thirds) are kept in frame, and other edits are automatically incorporated.
Auto Reframe can be applied to individual clips, or to an entire timeline, which generates a new, reframed sequence. If motion keyframes have already been added to individual shots, those decisions can be preserved by choosing an option to nest the clips on the timeline. If the clips are not nested, the new keyframes will replace any existing motion adjustments.
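The basic geometry of reframing is straightforward to sketch, even though Adobe's actual implementation uses Sensei to track the action and move the crop window over time. The hypothetical function below (not Adobe code) shows only the static version of the problem: given a source frame and a target aspect ratio, compute a centered crop that fits.

```python
def reframe(src_w, src_h, target_aspect):
    """Compute a centered crop (x, y, w, h) of a source frame that
    matches a target aspect ratio (e.g. 1.0 for square, 9/16 for
    vertical). Illustrative only -- Auto Reframe additionally
    animates the crop window to follow the on-screen action."""
    src_aspect = src_w / src_h
    if target_aspect < src_aspect:
        # Target is narrower than the source: keep full height, crop the sides.
        crop_h = src_h
        crop_w = round(src_h * target_aspect)
    else:
        # Target is wider than the source: keep full width, crop top and bottom.
        crop_w = src_w
        crop_h = round(src_w / target_aspect)
    x = (src_w - crop_w) // 2
    y = (src_h - crop_h) // 2
    return x, y, crop_w, crop_h

# A 1920x1080 widescreen frame cropped to a square:
print(reframe(1920, 1080, 1.0))  # (420, 0, 1080, 1080)
```

What Sensei adds on top of this is deciding *where* the crop window sits in each frame, which is exactly the keyframed position data the nesting option preserves or replaces.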
The graphics workflow has been improved with some new options, including the ability to rename shape layers in the Essential Graphics panel, a button for underlining text, and, for motion graphics templates created in After Effects, an option to include multi-line text fields that remain editable in Premiere Pro. Those After Effects-generated templates can also use drop-down menus to let Premiere Pro editors select from different styles or graphic elements. New keyboard shortcuts have also been enabled in Premiere Pro’s Essential Graphics panel.
More new features improve multichannel audio mixing in Premiere, including a Loudness Radar effect designed to help ensure that the volume of a sequence is within broadcast standards.
Finally, Adobe is claiming performance improvements for H.264, H.265 and ProRes on both macOS and Windows, along with support for Canon XF-HEVC files, Canon EOS C500 Mark II footage, and Sony Venice V4 footage.
For more detail, see Adobe’s complete feature summary.
Performance has been tuned up in After Effects thanks to improvements to processor threading and new GPU acceleration that speeds up previews. Also new: support for importing layered EXR files as compositions; performance improvements for projects with shape layers, including easier control of shape grouping; the ability to use expressions to make global changes to multiple text layers in a project; improvements to the expression editor; and faster processing of comps that use the same expression multiple times.
Content Aware Fill has been improved to reduce memory usage by 66%, increasing performance by 10% to 25%, Adobe said.
Like Premiere, After Effects now supports Canon XF-HEVC and has better playback support for 10-bit H.265 and HEVC files, the company said.
Adobe Audition now supports effects channelization, meaning effects can be routed to different input and output channels — an especially useful enhancement for those working with effects in multichannel mixes.
Character Animator finally gets keyframes, which are implemented using essentially the same interface already employed by After Effects and Premiere Pro.
Also new in this version are scene cameras, which let users preset different views of a scene and switch between them by activating triggers. The camera can change position or zoom in and out, and the scene can be rotated relative to the camera. One shot can be set as the default, meaning the scene returns to that view whenever an active trigger is turned off. Audio can be triggered, too, allowing sound to be synced with animation triggers.
Another new animation feature is the Motion Lines behavior, which draws lines along an object’s motion path to accentuate fast movement.
Photoshop on iPad
Adobe Max also featured one of the most-anticipated product debuts in Adobe history — Photoshop on the iPad. Spurred along by increased competition from the popular Photoshop-like app Affinity Photo, Adobe confirmed last year that a “full version” of Photoshop was on its way to iPad. But what is it really?
Well, anyone who expected the iPad experience to be more or less like firing up Photoshop on a desktop or laptop computer — or a more iPad-like Microsoft Surface Pro — may be disappointed. For instance, there is no video/animation timeline and GIF exports are not supported. There is no Pen tool, and the experience is definitely geared toward a touchscreen (or Apple Pencil) interface. Moreover, if you poke around enough, you’ll find placeholder alerts in the interface specifying that a given feature is “Not yet supported on this device.”
But what is available is impressive. You get Photoshop standbys including the Horizontal Type tool, Lasso, Healing Brush and Clone Stamp. The familiar workflow for building imagery using layer upon layer of tweakable, blendable components is in effect. And Adobe emphasized that PSD files are cross-compatible between desktop systems and the iPad — the format has been extended to support cloud documents, which get a “PSDC” extension and are synced among all of a user’s devices — so if there’s something you can’t do on the iPad, you can make a note to fix it on the desktop.
“Photoshop on the iPad is built using the same code base as Photoshop on the desktop,” explained Photoshop Product Manager Pam Clark in a blog post. “Photoshop on the iPad supports large files and many, many layers, just like Photoshop on your desktop, preserving your data across devices. The edits you make, whether making layer adjustments, masking, or spot healing, will produce the same results across devices because the app is powered by the same desktop engine.”
Clark went on to note that the current version of the app is “the beginning,” with a focus on the most common Photoshop tasks and workflows. Over time, she said, more capabilities will be supported on “a regular cadence of releases.”
The original Photoshop got a number of updates, most notably the Object Select Tool, which uses Sensei technology to identify and isolate objects in an image based on a rough Rectangular Marquee or Lasso selection. Content-Aware Fill has also been upgraded, giving users the ability to restrict the parts of an image that can be sampled to generate the fill when removing an object from the picture. See them both in action in the Adobe-provided video below.
It was somewhat soft-pedaled in the wake of all the Photoshop news, but today also saw the v1.0 release of Substance Alchemist, the materials library and look-development toolset originally shown by Substance maker Allegorithmic at SIGGRAPH 2018 before Adobe’s purchase of the company. At Max, Adobe suggested artists could download 3D assets from Adobe Stock, then texture those blank assets using materials included with Substance Painter or downloaded from the Substance Source service, potentially customizing them further in Alchemist.
Adobe also said it is working to make brushes in Substance Painter work more like Photoshop brushes, with improved pen-pressure accuracy, new brush settings and blending modes, and support for ABR brushes, a set of which Adobe plans to ship with the next release.
HP and Adobe are collaborating on something called Project Captis, with the intent of developing “an end-to-end product that can transform physical materials into digital 3D materials” using HP workstation technology and Substance Alchemist software — along with HP’s 3D printing technology to bring objects created as 3D models back into the real world. The companies said the solution will be targeted at industries including game development, ecommerce, architecture and fashion. The project is set to run through 2020.
Aero is something completely new — an iOS app designed for building immersive AR experiences. Designers can use a phone or tablet to place objects in space, shape paths, and add triggers for interactivity. A collection of “starter assets” is included with the app, but users can import 2D and 3D imagery from vector graphics, PSD, OBJ, FBX, Collada, glTF, and other formats. The video below featuring Oakland-based graffiti artist Hueman shows some of what’s possible.