Dragon Age: The Veilguard info compilation Post 1
Post is under a cut due to length.
There is a lot of information coming out right now about DA:TV from many different sources. This post is just an effort to compile as much as I can in one place, in case that helps anyone. Sources for where the information came from have been included. Where I am linking to a social media user's post, the person is either a dev, a Dragon Age community council member or another person who has had a sneak peek at and played the game. NB, this post is more of an 'info that came out in snippets from articles and social media posts' collection rather than a 'regurgitating the information on the official website or writing out what happened in the trailer/gameplay reveal' post. The post is broken down into headings on various topics. A few points are repeated under multiple headings where relevant. Where I am speculating without a source, I have clearly demarcated this. If you notice any mistakes in this post, please tell me.
As this post hit a kind of character limit, there will probably be at least 1 more post. :)
Character Creation
CC is vast [source] and immensely detailed [source]
We will enter CC straight after Varric's opening narration [source]
You are given 5 categories to work your way through in CC: Lineage, Appearance, Class, Faction, Playstyle. Each of these has a range of subcategories within them. There are 8 subcategories within the "head" section of "Appearance" alone [source]
Lineage dictates things like race (i.e. human, elf, dwarf, qunari) and backstory [source]
Backstories include things like factions. Factions offer 3 distinct buffs each [source]
There are dozens and dozens of hairstyles [source]
There are separate options for binary and non-binary pronouns and gender [source]
"BioWare's work behind the scenes, meanwhile, goes as deep as not only skin tones but skin undertones, melanin levels, and the way skin reacts differently to light" [source]
CC has a range of lighting options within it so that you can check how the character looks in them [source]
There are a range of full-body customization options, such as a triangular slider between body types and individual settings for everything from shoulder width to glute volume [source]. There are "all the sliders [we] could possibly want". The body morpher option allows us to choose different body sizes [source]
All body options are non-gendered [source]
They/them pronouns are an option [source]
Rook can be played as non-binary [source]
Individual strands of hair were rendered separately and react remarkably to in-game physics [source]
Special, focused attention was paid to ensuring that hairstyles "come across as well-representative, that everyone can see hairstyles that feel authentic to them, even the way they render" [source]
The game uses strand hair technology borrowed in part from the EA Sports games. The hair is "fully-controlled by physics," so it "looks even better in motion than it does here in a standstill" [source]
The ability to import our choices from previous games is fully integrated into CC. This will take the form of tarot cards - "you can go into your past adventures" and this mechanic tells you what the context was and what decision you want to make [source]
In CC we will also be able to customize/remake our Inquisitor [source]
A core tenet of the game is "be who you want to be" [source]
There are presets for all 4 of the game's races (human, elf, dwarf, qunari), in case detailed CCs overwhelm you [source]
Story
The story is set 9 years since Inquisition [source]
The Inquisitor will appear [source]
Other characters refer to the PC as Rook [source]. This article says they are "the Rook" [source]
The ability to import our choices from previous games is fully integrated into CC. This will take the form of tarot cards - "you can go into your past adventures" and this mechanic tells you what the context was and what decision you want to make [source]
The prologue is quite lengthy. A narrated intro from Varric lays the groundwork with some lore and explains about Solas [source]. In this Varric-narrated opening section, the dwarf recaps the events of previous games and explains the motivations of Solas [source] (Fel note/speculation: this sounds like this cinematic that we saw on DA Day 2023)
What happens first off is that Rook, who is working with Varric, is interrogating a bartender about the whereabouts of a contact in Minrathous who can help them stop Solas. The bartender does not play nice and we are presented with our first choice: talk the bartender down or intimidate them aggressively [source]
The first hour of the game is "a luxurious nighttime romp through a crumbling city under a mix of twinkling starlight and lavish midnight blue" (Minrathous) [source]. The game begins with a tavern brawl (depending on dialogue options) and a stroll through Minrathous in search of Neve Gallus, who has a lead on Solas [source]. Minrathous then comes under attack [source] by demons [source] (Fel note/speculation: it sounds like the demo the press played is what we saw in the Gameplay Reveal). Off in the distance is a vibrant, colorful storm where Solas is performing his ritual [source]. Eventually we come upon Harding [source] and Neve. Rook and co enter a crumbling castle, where ancient elf secrets pop up, "seemingly just for the lore nerds" [source]. Then we teleport to Arlathan Forest, have a mini boss fight with a Pride Demon, and there is the climactic confrontation with Solas. A closing sequence then marks the end of the game's opening mission [source] (Fel note/speculation: So the Gameplay Reveal showed the game's opening mission)
The action in the story's opening parts starts off quite quickly, from the sounds of things: the devs wanted to get the player right into the story, because, “Especially with an RPG where they can be quite lore-heavy, a lot of exposition at the front and remembering proper nouns, it can be very overwhelming.” [source]
BioWare wanted to make the beginning of Dragon Age: The Veilguard feel like the finale of one of their other games [source]
Rook's Faction will be referenced in dialogue [source]
Minrathous is beautiful, with giant statues, floating palaces, orange lantern glow and magical runes which glow neon green. These act "like electricity" as occasional signs above pubs and stores [source]
The story has a lot of darkness tonally. These dark parts of the game contain the biggest spoilers [source]. However, the team really wanted to build in contrast between the dark and light moments in the game, as if everything is dark, nothing really feels dark [source]
Our hub (like the Normandy in ME or Skyhold in DA:I) is a place called The Lighthouse [source] (Fel note/speculation: I guess this screenshot shows the crew in The Lighthouse? ^^)
Each companion has a very complex backstory, their own problems, and deep motivations. These play out through well-fleshed out character arcs and missions that are unique to them but which are ultimately tied into the larger story [source]
We will make consequential decisions for each character, sometimes affecting who they are in heart-wrenching ways and other times joyously [source]
Decisions from previous DA games will be able to be carried over, it will just work a bit differently this time [source]. The game will not read our previous saves. For stuff pertaining to previous games/choices, players will not have to link their accounts [source]
Characters, companions, romance
Varric is a major character [source]
Every companion is romanceable [source]
BioWare tried to make each character's friendship just as meaningful, regardless of romance [source]
If you don't romance a character, they may end up romancing each other [source]
There will be some great cameos [source]. Some previous characters are woven into the game [source]
Companion sidequests/optional content relating to companions is highly curated when it involves their motivations and experiences [source]
We could permanently lose some companions depending on our choices [source]
Our choices can influence if characters get injured and what they think about us [source]
The bonds Rook forges with companions determine how party members grow and what abilities become available [source]
Each companion has a very complex backstory, their own problems, and deep motivations. These play out through well-fleshed out character arcs and missions that are unique to them but which are ultimately tied into the larger story [source]
We will make consequential decisions for each character, sometimes affecting who they are in heart-wrenching ways and other times joyously [source]
Gameplay, presentation, performance etc
Each class (warrior, rogue, mage) has 3 specializations. The ones for Rogue are duelist, saboteur and Veil ranger [source]. (Fel note/speculation: Veil ranger reminds me of Bellara. Maybe this is her 'spec' too?)
Duelist gameplay involves a sharp combination of dashes, parries, leaps, rapid slashes and combos [source]
Faction-related buffs include being able to hold an extra potion or do extra damage against certain enemies [source]
Individual strands of hair were rendered separately and react remarkably to in-game physics [source]
Playstyle settings include custom, distinct difficulty settings for options as granular as parry windows, meaning "players who might fancy that playstyle but typically struggle with the finer points of combat can give it a go" [source]
Combat is a mix of real-time action and pause-and-play. Pausing brings up a radial menu split into 3 sections: companions to the left and right, Rook's skills at the bottom, and a targeting system at the top which helps you focus on certain enemies [source]. In the pause system you can queue up your whole party's attacks [source]
Tapping or holding the shoulder button pauses the game, allowing us to stop the action and issue orders to companions [source]
There is a system of specific enemy resistances and weaknesses [source]. Weaknesses and resistances play a big role in combat, and abilities are designed to exploit these accordingly [source]. An example is that "one character might be able to plant a weakening debuff on an enemy, and another character might be able to detonate them" [source]
There is a vast skill tree of unlockable options [source]
You can set up specific companions with certain kits, e.g. to tackle specific enemy types, to being more of a support, or as flexible all-rounders [source]
Healing magic returns [source]
Abilities can chain together with elaborate results, e.g. one companion using a gravity well attack that sucks enemies together, another using a slowing move to keep them in place, and Rook using a big AOE to catch them all at once [source]
A shortcut system lets you map a few abilities to a smaller pinned menu at the bottom of the screen [source]
There are class-specific resource systems. For example, Rogue has "momentum", which builds up as Rook lands consecutive hits [source]
Each class will always have a ranged option [source]
Rogue Rook can do a sort of 'hip fire' option with a bow, letting you pop off arrows from the waist [source]
Warriors can throw their shield at enemies, and can build an entire playstyle around that using the skill tree [source]
There is light platforming gameplay [source]
The game runs on the latest iteration of the Frostbite engine [source]
The game targets 60 fps
On consoles it will feature performance and quality modes so we can choose our preferred visual fidelity [source]
The game is mission based [source]. Some levels that we go to do open up, some with more exploration than others. "Alternate branching paths, mysteries, secrets, optional content you're going to find and solve." [source]
Everything is hand-touched, hand-crafted and highly curated [source]
Some sidequests and optional content are highly curated, especially when they involve the motivations and experiences of the companions. In others we may be investigating, for example, a missing family, with an entire open bog environment to search for clues and a way to solve the disappearance [source]
Gameplay, presentation, performance etc continued, after the above bullet list hit a character limit
There is sophisticated animation cancelling and branching. Gameplay is action-like, and the design centers around dodging, countering, and using risk-reward charge attacks designed to break enemy armor layers [source]
The dialogue wheel returns [source]. It gives truncated summaries of the dialogue options rather than the full line that the character is going to say [source]
The bonds Rook forges with companions determine how party members grow and what abilities become available [source]
For stuff pertaining to previous games/choices, players will not have to link their accounts [source]
We can play the game fully offline [source]
There are no microtransactions [source]
The game itself is not as cel-shaded in look as the first trailer suggested [source]
[☕ found this post or blog interesting or useful? my ko-fi is here if you feel inclined. thank you 🙏]
Neural Filters Tutorial for Gifmakers by @antoniosvivaldi
Hi everyone! In light of my blog’s 10th birthday, I’m delighted to reveal my highly anticipated gifmaking tutorial using Neural Filters - a very powerful collection of filters that really broadened my scope in gifmaking over the past 12 months.
Before I get into this tutorial, I want to thank @laurabenanti, @maines, @cobbbvanth, and @cal-kestis for their unconditional support over the course of my journey of investigating the Neural Filters & their valuable inputs on the rendering performance!
In this tutorial, I will outline what the Photoshop Neural Filters do and how I use them in my workflow - multiple examples will be provided for better clarity. Finally, I will talk about some known performance issues with the filters & some feasible workarounds.
Tutorial Structure:
Meet the Neural Filters: What they are and what they do
Why I use Neural Filters: How I use Neural Filters in my giffing workflow
Getting started: The giffing workflow in a nutshell and installing the Neural Filters
Applying Neural Filters onto your gif: Making use of the Neural Filters settings; with multiple examples
Testing your system: recommended if you’re using Neural Filters for the first time
Rendering performance: Common Neural Filters performance issues & workarounds
For quick reference, here are the examples that I will show in this tutorial:
Example 1: Image Enhancement | improving the image quality of gifs prepared from highly compressed video files
Example 2: Facial Enhancement | enhancing an individual's facial features
Example 3: Colour Manipulation | colourising B&W gifs for a colourful gifset
Example 4: Artistic effects | transforming landscapes & adding artistic effects onto your gifs
Example 5: Putting it all together | my usual giffing workflow using Neural Filters
What you need & need to know:
Software: Photoshop 2021 or later (recommended: 2023 or later)*
Hardware: 8GB of RAM; having a supported GPU is highly recommended*
Difficulty: Advanced (requires a lot of patience); knowledge in gifmaking and using video timeline assumed
Key concepts: Smart Layer / Smart Filters
Benchmarking your system: Neural Filters test files**
Supplementary materials: Tutorial Resources / Detailed findings on rendering gifs with Neural Filters + known issues***
*I primarily gif on an M2 Max MacBook Pro that's running Photoshop 2024, but I also have experience gifmaking on a few other Mac models from 2012 ~ 2023.
**Using Neural Filters can be resource intensive, so it’s helpful to run the test files yourself. I’ll outline some known performance issues with Neural Filters and workarounds later in the tutorial.
***This supplementary page contains additional Neural Filters benchmark tests and instructions, as well as more information on the rendering performance (for Apple Silicon-based devices) when subject to heavy Neural Filters gifmaking workflows
Tutorial under the cut. Like / Reblog this post if you find this tutorial helpful. Linking this post as an inspo link will also be greatly appreciated!
1. Meet the Neural Filters!
Neural Filters are powered by Adobe's machine learning engine, known as Adobe Sensei. They are a non-destructive way to streamline workflows that would've been difficult and/or tedious to do manually.
Here are the Neural Filters available in Photoshop 2024:
Skin Smoothing: Removes blemishes on the skin
Smart Portrait: This is a cloud-based filter that allows you to change the mood, facial age, hair, etc. using the sliders+
Makeup Transfer: Applies the makeup (from a reference image) to the eyes & mouth area of your image
Landscape Mixer: Transforms the landscape of your image (e.g. seasons & time of the day, etc), based on the landscape features of a reference image
Style Transfer: Applies artistic styles e.g. texturings (from a reference image) onto your image
Harmonisation: Matches the colour balance of your layer to the lighting of the background image+
Colour Transfer: Applies the colour scheme (of a reference image) onto your image
Colourise: Adds colours onto a B&W image
Super Zoom: Zoom / crop an image without losing resolution+
Depth Blur: Blurs the background of the image
JPEG Artefacts Removal: Removes artefacts caused by JPEG compression
Photo Restoration: Enhances image quality & facial details
+These three filters aren't used in my giffing workflow. The cloud-based nature of Smart Portrait leads to disjointed-looking frames. For Harmonisation, applying it on a gif causes a Neural Filters timeout error. Finally, Super Zoom does not currently support output as a Smart Filter
If you're running Photoshop 2021 or an earlier version of Photoshop 2022, you will see a smaller selection of Neural Filters:
Things to be aware of:
You can apply up to six Neural Filters at the same time
Filters where you can use your own reference images: Makeup Transfer (portraits only), Landscape Mixer, Style Transfer (not available in Photoshop 2021), and Colour Transfer
Later iterations of Photoshop 2023 & newer: The first three default presets for Landscape Mixer and Colour Transfer are currently broken.
2. Why I use Neural Filters
Here are my four main Neural Filters use cases in my gifmaking process. In each use case I'll list out the filters that I use:
Enhancing Image Quality:
Common wisdom is to find the highest quality video to gif from for a media release & avoid YouTube whenever possible. However, for smaller / niche media (e.g. new & upcoming musical artists), prepping gifs from highly compressed YouTube videos is inevitable.
So how do I get around this? I have found Neural Filters pretty handy when it comes to both correcting issues from video compression & enhancing details in gifs prepared from these highly compressed video files.
Filters used: JPEG Artefacts Removal / Photo Restoration
Facial Enhancement:
When I prepare gifs from highly compressed videos, something I like to do is to enhance the facial features. This is again useful when I make gifsets from compressed videos & want to fill up my final panel with a close-up shot.
Filters used: Skin Smoothing / Makeup Transfer / Photo Restoration (Facial Enhancement slider)
Colour Manipulation:
Neural Filters are a powerful way to do advanced colour manipulation - whether I want to quickly transform the colour scheme of a gif or turn a B&W clip into something colourful.
Filters used: Colourise / Colour Transfer
Artistic Effects:
This is one of my favourite things to do with Neural Filters! I enjoy using the filters to create artistic effects by feeding textures that I've downloaded as reference images. I also enjoy using these filters to transform the overall atmosphere of my composite gifs. The gifsets where I've leveraged Neural Filters for artistic effects can be found under this tag on usergif.
Filters used: Landscape Mixer / Style Transfer / Depth Blur
How I use Neural Filters over different stages of my gifmaking workflow:
I want to outline how I use different Neural Filters throughout my gifmaking process. This can be roughly divided into two stages:
Stage I: Enhancement and/or Colourising | Takes place early in my gifmaking process. I process a large amount of component gifs by applying Neural Filters for enhancement purposes and adding some base colourings.++
Stage II: Artistic Effects & more Colour Manipulation | Takes place when I'm assembling my component gifs in the big PSD / PSB composition file that will be my final gif panel.
I will walk through this in more detail later in the tutorial.
++I personally like to keep the size of the component gifs in their original resolution (a mixture of 1080p & 4K), to get best possible results from the Neural Filters and have more flexibility later on in my workflow. I resize & sharpen these gifs after they're placed into my final PSD composition files in Tumblr dimensions.
3. Getting started
The essence is to output Neural Filters as a Smart Filter on the smart object when working with the Video Timeline interface. Your workflow will contain the following steps:
Prepare your gif
In the frame animation interface, set the frame delay to 0.03s and convert your gif to the Video Timeline
In the Video Timeline interface, go to Filter > Neural Filters and output to a Smart Filter
Flatten or render your gif (either approach is fine). To flatten your gif, play the "flatten" action from the gif prep action pack. To render your gif as a .mov file, go to File > Export > Render Video & use the following settings.
Setting up:
o.) To get started, prepare your gifs the usual way - whether you screencap or clip videos. You should see your prepared gif in the frame animation interface as follows:
Note: As mentioned earlier, I keep the gifs in their original resolution right now because working with a larger dimension document allows more flexibility later on in my workflow. I have also found that I get higher quality results working with more pixels. I eventually do my final sharpening & resizing when I fit all of my component gifs to a main PSD composition file (that's of Tumblr dimension).
i.) To use Smart Filters, convert your gif to a Smart Video Layer.
As an aside, I like to work with everything in 0.03s until I finish everything (then correct the frame delay to 0.05s when I upload my panels onto Tumblr).
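To make that timing trade-off concrete, here is a quick back-of-envelope calculation. The helper function is my own, purely for illustration - it is not part of Photoshop or any gifmaking tooling:

```python
# Illustrative arithmetic for the 0.03s editing delay vs the 0.05s
# upload delay mentioned above. Helper name is my own invention.

def playback_seconds(frame_count: int, delay_s: float) -> float:
    """Total playback time of a gif with a uniform per-frame delay."""
    return round(frame_count * delay_s, 2)

# A 60-frame gif previews a bit faster while editing at 0.03s,
# then runs at its intended speed once corrected to 0.05s for Tumblr:
print(playback_seconds(60, 0.03))  # 1.8
print(playback_seconds(60, 0.05))  # 3.0
```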
For convenience, I use my own action pack to first set the frame delay to 0.03s (highlighted in yellow) and then convert to timeline (highlighted in red) to access the Video Timeline interface. To play an action, press the play button highlighted in green.
Once you've converted this gif to a Smart Video Layer, you'll see the Video Timeline interface as follows:
ii.) Select your gif (now as a Smart Layer) and go to Filter > Neural Filters
Installing Neural Filters:
Install the individual Neural Filters that you want to use. If the filter isn't installed, it will show a cloud symbol (highlighted in yellow). If the filter is already installed, it will show a toggle button (highlighted in green)
When you toggle this button, the Neural Filters preview window will look like this (where the toggle button next to the filter that you use turns blue)
4. Using Neural Filters
Once you have installed the Neural Filters that you want to use in your gif, you can toggle on a filter and play around with the sliders until you're satisfied. Here I'll walk through multiple concrete examples of how I use Neural Filters in my giffing process.
Example 1: Image enhancement | sample gifset
This is my typical Stage I Neural Filters gifmaking workflow. When giffing older or more niche media releases, my main concern is the video compression that leads to a lot of artefacts in the screencapped / video clipped gifs.
To fix the artefacts from compression, I go to Filter > Neural Filters, and toggle JPEG Artefacts Removal filter. Then I choose the strength of the filter (boxed in green), output this as a Smart Filter (boxed in yellow), and press OK (boxed in red).
Note: The filter has to be fully processed before you can press the OK button!
After applying the Neural Filters, you'll see "Neural Filters" under the Smart Filters property of the smart layer
Flatten / render your gif
Example 2: Facial enhancement | sample gifset
This is my routine use case during my Stage I Neural Filters gifmaking workflow. For musical artists (e.g. Maisie Peters), YouTube is often the only place where I'm able to find some videos to prepare gifs from. However even the highest resolution video available on YouTube is highly compressed.
Go to Filter > Neural Filters and toggle on Photo Restoration. If Photoshop recognises faces in the image, there will be a "Facial Enhancement" slider under the filter settings.
Play around with the Photo Enhancement & Facial Enhancement sliders. You can also expand the "Adjustment" menu to make additional adjustments, e.g. removing noise and reducing different types of artefacts.
Once you're happy with the results, press OK and then flatten / render your gif.
Example 3: Colour Manipulation | sample gifset
Want to make a colourful gifset but the source video is in B&W? This is where Colourise from Neural Filters comes in handy! This same colourising approach is also very helpful for colouring poorly-lit scenes, as detailed in this tutorial.
Here's a B&W gif that we want to colourise:
Highly recommended: add some adjustment layers onto the B&W gif to improve the contrast & depth. This will give you higher quality results when you colourise your gif.
Go to Filter > Neural Filters and toggle on Colourise.
Make sure "Auto colour image" is enabled.
Play around with further adjustments e.g. colour balance, until you're satisfied then press OK.
Important: When you colourise a gif, you need to double check that the resulting skin tone is accurate to real life. I personally go to Google Images and search up photoshoots of the individual / character that I'm giffing for quick reference.
Add additional adjustment layers until you're happy with the colouring of the skin tone.
Once you're happy with the additional adjustments, flatten / render your gif. And voila!
Note: For Colour Manipulation, I use Colourise in my Stage I workflow and Colour Transfer in my Stage II workflow to do other types of colour manipulations (e.g. transforming the colour scheme of the component gifs)
Example 4: Artistic Effects | sample gifset
This is where I use Neural Filters for the bulk of my Stage II workflow: the most enjoyable stage in my editing process!
Normally I would be working with my big composition files with multiple component gifs inside. To begin the fun, drag a component gif (as a PSD file) into the main PSD composition file.
Resize this gif in the composition file until you're happy with the placement
Duplicate this gif. Sharpen the bottom layer (highlighted in yellow), and then select the top layer (highlighted in green) & go to Filter > Neural Filters
I like to use Style Transfer and Landscape Mixer to create artistic effects from Neural Filters. In this particular example, I've chosen Landscape Mixer
Select a preset or feed a custom image to the filter (here I chose a texture that I have on my computer)
Play around with the different sliders e.g. time of the day / seasons
Important: uncheck "Harmonise Subject" & "Preserve Subject" - these two settings are known to cause performance issues when you render a multiframe smart object (e.g. for a gif)
Once you're happy with the artistic effect, press OK
To ensure you preserve the actual subject you want to gif (because Preserve Subject is unchecked), add a layer mask onto the top layer (with Neural Filters) and mask out the facial region. You might need to play around with the Layer Mask Position keyframes or Rotoscope your subject in the process.
After you're happy with the masking, flatten / render this composition file and voila!
Example 5: Putting it all together | sample gifset
Let's recap on the Neural Filters gifmaking workflow and where Stage I and Stage II fit in my gifmaking process:
i. Preparing & enhancing the component gifs
Prepare all component gifs and convert them to smart layers
Stage I: Add base colourings & apply Photo Restoration / JPEG Artefacts Removal to enhance the gif's image quality
Flatten all of these component gifs and convert them back to Smart Video Layers (this process can take a lot of time)
Some of these enhanced gifs will be Rotoscoped, so this is done before adding the gifs to the big PSD composition file
ii. Setting up the big PSD composition file
Make a separate PSD composition file (Ctrl / Cmd + N) that's of Tumblr dimensions (e.g. 540px in width)
Drag all of the component gifs used into this PSD composition file
Enable Video Timeline and trim the work area
In the composition file, resize / move the component gifs until you're happy with the placement & sharpen these gifs if you haven't already done so
Duplicate the layers that you want to use Neural Filters on
iii. Working with Neural Filters in the PSD composition file
Stage II: Neural Filters to create artistic effects / more colour manipulations!
Mask the smart layers with Neural Filters to both preserve the subject and avoid colouring issues from the filters
Flatten / render the PSD composition file: the more component gifs in your composition file, the longer the exporting will take. (I prefer to render the composition file into a .mov clip to avoid overwriting a file that I've spent effort putting together.)
Note: In some of my layout gifsets (where I've heavily used Neural Filters in Stage II), the rendering time for the panel took more than 20 minutes. This is one of the rare instances where I was maxing out my computer's memory.
Useful things to take note of:
Important: If you're using Neural Filters for Colour Manipulation or Artistic Effects, you need to take a lot of care to ensure that the skin tone of nonwhite characters / individuals is accurately coloured
Use the Facial Enhancement slider from Photo Restoration in moderation; if you max out the slider value, you risk oversharpening your gif later on in your gifmaking workflow
You will get higher quality results from Neural Filters by working with larger image dimensions: This gives Neural Filters more pixels to work with. You also get better quality results by feeding higher resolution reference images to the Neural Filters.
Makeup Transfer is more stable when the person / character has minimal motion in your gif
You might get unexpected results from Landscape Mixer if you feed it a reference image that doesn't feature a distinctive landscape. This is not always a bad thing: for instance, I have used this texture as a reference image for Landscape Mixer to create the shimmery effects seen in this gifset
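On the image-dimensions point above, the difference in pixel budget is large. A rough comparison - the 16:9 aspect ratio and 540px Tumblr width here are illustrative assumptions, not exact gif dimensions:

```python
# Rough pixel-budget comparison behind the "larger image dimensions"
# advice above. The 16:9 ratio and 540px width are assumptions.

full_hd = 1920 * 1080     # 2,073,600 pixels per 1080p frame
tumblr_sized = 540 * 304  # ~540px wide at roughly 16:9

# The filters get roughly 12x more pixels to work with at 1080p:
print(full_hd // tumblr_sized)  # 12
```

This is why resizing down to Tumblr dimensions only after the Neural Filters have run tends to give cleaner results.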
5. Testing your system
If this is the first time you're applying Neural Filters directly onto a gif, it will be helpful to test out your system yourself. This will help:
Gauge the expected rendering time that you'll need to wait for your gif to export, given specific Neural Filters that you've used
Identify potential performance issues when you render the gif: this is important and will determine whether you need to play back your gif in full before flattening / rendering the file.
Understand how your system's resources are being utilised: Inputs from Windows PC users & Mac users alike are welcome!
About the Neural Filters test files:
Contains six distinct files, each using different Neural Filters
Two sizes of test files: one copy in full HD (1080p) and another copy downsized to 540px
One folder containing the flattened / rendered test files
How to use the Neural Filters test files:
What you need:
Photoshop 2022 or newer (recommended: 2023 or later)
Install the following Neural Filters: Landscape Mixer / Style Transfer / Colour Transfer / Colourise / Photo Restoration / Depth Blur
Recommended for some Apple Silicon-based MacBook Pro models: Enable High Power Mode
How to use the test files:
For optimal performance, close all background apps
Open a test file
Flatten the test file into frames (load this action pack & play the “flatten” action)
Take note of the time it takes until you’re directed to the frame animation interface
Compare the rendered frames to the expected results in this folder: check that all of the frames look the same. If they don't, you will need to play back the test file in full before flattening it.†
Re-run the test file without the Neural Filters and take note of how long it takes before you're directed to the frame animation interface
Recommended: Take note of how your system is utilised during the rendering process (more info here for MacOS users)
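If you want to keep a record of your timings from the checklist above across different filters and machines, a tiny stopwatch script can help. Everything here (the log_render helper, the CSV file name, and the label) is hypothetical and not part of the tutorial's own workflow; it is just a sketch of one way to log results.

```python
import csv
import time
from datetime import date

# Hypothetical helper: append one timing result to a CSV log so you
# can compare runs (per filter, per file size, per machine) later.
def log_render(label, seconds, path="render_times.csv"):
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([date.today().isoformat(), label, round(seconds, 1)])

# Start the clock just before you play the "flatten" action in Photoshop,
# then call log_render once the frame animation interface appears.
start = time.perf_counter()
# ... flatten / render the test file in Photoshop ...
log_render("Landscape Mixer, 1080p test file", time.perf_counter() - start)
```

Repeating this for each test file gives you a simple baseline to compare against after system or Photoshop updates.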
†This is a performance issue known as flickering that I will discuss in the next section. If you come across it, you'll have to play back a gif where you've used Neural Filters (on the video timeline) in full, prior to flattening / rendering it.
Factors that could affect the rendering performance / time (more info):
The number of frames, dimensions, and colour bit depth of your gif
If you use Neural Filters with facial recognition features, the rendering time will be affected by the number of characters / individuals in your gif
Most resource-intensive filters (powered by the largest machine learning models): Landscape Mixer / Photo Restoration (with Facial Enhancement) / JPEG Artefacts Removal
Least resource-intensive filters (powered by the smallest machine learning models): Colour Transfer / Colourise
The number of Neural Filters that you apply at once / The number of component gifs with Neural Filters in your PSD file
Your system: system memory, the GPU, and the architecture of the system's CPU+++
+++ Rendering a gif with Neural Filters demands a lot of system memory & GPU horsepower. Rendering will be faster & more reliable on newer computers, as these systems have CPU & GPU with more modern instruction sets that are geared towards machine learning-based tasks.
Additionally, the unified memory architecture of Apple Silicon M-series chips is found to be quite efficient at processing Neural Filters.
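The frame count, dimensions, and bit depth factors above compound multiplicatively. As a back-of-envelope illustration (this is not Photoshop's actual memory model, just a rough estimate of an uncompressed frame buffer):

```python
# Rough working-set estimate: uncompressed frames scale with
# width x height x channels x bytes-per-channel x frame count.
def frame_buffer_mb(width, height, frames, bit_depth=8, channels=3):
    bytes_per_channel = bit_depth // 8
    return width * height * channels * bytes_per_channel * frames / 2**20

# A 60-frame full-HD gif, 8-bit vs 16-bit:
print(frame_buffer_mb(1920, 1080, 60, 8))   # ≈ 356 MB
print(frame_buffer_mb(1920, 1080, 60, 16))  # ≈ 712 MB, i.e. double
# The same gif downsized to Tumblr's 540px width:
print(frame_buffer_mb(540, 304, 60, 8))     # ≈ 28 MB
```

This is why the workarounds in the next section (smaller bit depth, fewer frames, downsizing) all pull in the same direction.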
6. Performance issues & workarounds
Common Performance issues:
Below, I will discuss several common issues related to rendering or exporting a multi-frame smart object (e.g. your composite gif) that uses Neural Filters. These are commonly caused by insufficient system memory and/or GPU resources.
Flickering frames: in the flattened / rendered file, Neural Filters aren't applied to some of the frames+-+
Scrambled frames: the frames in the flattened / rendered file aren't in order
Neural Filters exceeded the timeout limit error: this is normally a software related issue
Long export / rendering time: long rendering time is expected in heavy workflows
Laggy Photoshop / system interface: having to wait quite a long time to preview the next frame on the timeline
Issues with Landscape Mixer: using the filter gives ill-defined results (common on older systems)--
Workarounds:
Workarounds that can mitigate unreliable rendering and long rendering times:
Close other apps running in the background
Work with smaller colour bit depth (i.e. 8-bit rather than 16-bit)
Downsize your gif before converting to the video timeline-+-
Try to keep the number of frames as low as possible
Avoid stacking multiple Neural Filters at once. Try applying & rendering the filters that you want one by one
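If you'd rather downsize before you ever convert to the video timeline, you can also resize every frame of a gif outside Photoshop. This sketch uses the third-party Pillow library (`pip install Pillow`); the file names are placeholders, and the first few lines only generate a stand-in input gif so the example is self-contained.

```python
from PIL import Image, ImageSequence  # third-party: pip install Pillow

# Stand-in input: a tiny 2-frame 1080px-wide gif (use your own file instead).
frames_in = [Image.new("RGB", (1080, 608), c) for c in ("red", "blue")]
frames_in[0].save("source.gif", save_all=True, append_images=frames_in[1:],
                  duration=40, loop=0)

# Downsize every frame to 540px wide, preserving aspect ratio.
src = Image.open("source.gif")
target_w = 540
out = []
for frame in ImageSequence.Iterator(src):
    f = frame.convert("RGB")
    h = round(f.height * target_w / f.width)
    out.append(f.resize((target_w, h), Image.LANCZOS))
out[0].save("source_540px.gif", save_all=True, append_images=out[1:],
            duration=src.info.get("duration", 40), loop=0)
print(Image.open("source_540px.gif").size)  # → (540, 304)
```

The resulting smaller file can then be imported into Photoshop's video timeline as usual.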
Specific workarounds for specific issues:
How to resolve flickering frames: If you come across flickering, play back your gif on the video timeline in full to find the frames where the filter isn't applied, then select all of the frames so that Photoshop reprocesses them before you render your gif.+-+
What to do if you come across the Neural Filters timeout error: this is caused by certain incompatible Neural Filters, e.g. Harmonisation (both the standalone filter and the setting within Landscape Mixer), Scratch Reduction in Photo Restoration, and stacking multiple Neural Filters with facial recognition features.
If the timeout error is caused by stacking multiple filters, a feasible workaround is to apply the Neural Filters that you want to use one by one over multiple rendering sessions, rather than all of them in one go.
+-+This is a very common issue for Apple Silicon-based Macs. Flickering happens when a gif with Neural Filters is rendered without being previously played back in the timeline.
This issue is likely related to the memory bandwidth & the GPU cores of the chips, because not all Apple Silicon-based Macs exhibit this behaviour (i.e. devices equipped with Max / Ultra M-series chips are mostly unaffected).
-- As mentioned in the supplementary page, Landscape Mixer requires a lot of GPU horsepower to be fully rendered. For older systems (pre-2017 builds), there are no workarounds other than to avoid using this filter.
-+- At smaller dimensions, the size of the machine learning models powering the filters plays an outsized role in the rendering time (i.e. downsizing a 1080p file to Tumblr dimensions yields only a marginal reduction in rendering time). If you use filters powered by larger models, e.g. Landscape Mixer and Photo Restoration, you will need to be very patient when exporting your gif.
7. More useful resources on using Neural Filters
Creating animations with Neural Filters effects | Max Novak
Using Neural Filters to colour correct by @edteachs
I hope this is helpful! If you have any questions or need any help related to the tutorial, feel free to send me an ask 💖