1) Theory
                    
Objectives:

In order to use XR - Extended and augmented reality, the colors of the video input must be as close as possible to the colors of the overlay image.
				However, between the screens and the camera, the image passes through several colorimetric profiles.
The pictures below were taken before and after applying a color calibration to the Extended.

How it works:

Just like the Geometric Calibration, Smode needs one or several viewpoints of the setup to determine the color correction to apply to the Extended.
These points of view are not single images but sequences of grids of different colors cast on the screens. Smode then compares each color sent in each square with the one received, and from this determines the color model of your setup.
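How Smode computes this internally is not detailed here; as a rough illustration of the idea, the minimal Python sketch below fits a simple linear model from made-up (sent, received) color pairs. The names and sample values are hypothetical.

    # Illustrative sketch only, not Smode's internal algorithm: estimate a simple
    # linear color model from (sent, received) RGB pairs gathered from the colored
    # grids cast on the screens. All sample values below are made up.
    import numpy as np

    sent = np.array([
        [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0],
        [1.0, 1.0, 0.0], [0.0, 1.0, 1.0], [0.5, 0.5, 0.5],
    ])
    received = np.array([
        [0.90, 0.06, 0.03], [0.05, 0.85, 0.08], [0.02, 0.07, 0.80],
        [0.88, 0.84, 0.10], [0.06, 0.83, 0.78], [0.46, 0.45, 0.43],
    ])

    # Least-squares fit of a 3x3 matrix M such that sent @ M ~= received.
    M, _, _, _ = np.linalg.lstsq(sent, received, rcond=None)

    def color_model(rgb):
        # Predict the color the camera sees when `rgb` is displayed on the wall.
        return np.clip(np.asarray(rgb) @ M, 0.0, 1.0)

    print(color_model([0.2, 0.7, 0.3]))

The real color model is richer than a single 3x3 matrix (the calibration ultimately produces a collection of LUTs, see below), but the principle is the same: learn the mapping between what is sent to the screens and what the camera receives.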

Color model: the colorimetric profile of your setup.

 
                    
                    4) Calibration process
In the XR Calibrator, Color tab, check "Enable locator".
Try to detect as many AprilTags as possible, especially at the junctions of the walls and in the corners, as these are the places where the color correction is needed most.
Adjust the focus of the camera to detect more of them.
 
                    
                    
You can also change the "Quad Decimate" parameter (In -> Detector: April Tag) to increase the number of tags detected on the screens.
 
                    
                     
                    
                    
Lower the Decimate value to detect more tags (but Smode can slow down if your machine is not powerful enough).
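Quad Decimate is a standard AprilTag detector setting: the image is downsampled by that factor before tag candidates are searched, so a lower value finds more (and smaller) tags at the cost of CPU time. Smode exposes it in the GUI; purely as an illustration of the same knob in the open-source pupil-apriltags Python library (not Smode's API):

    # Illustration with the pupil-apriltags library, not Smode's detector.
    # quad_decimate=1.0 means full resolution: slowest, but detects the most tags.
    import numpy as np
    from pupil_apriltags import Detector

    detector = Detector(families="tag36h11", quad_decimate=1.0)

    frame = np.zeros((1080, 1920), dtype=np.uint8)  # stand-in for a grayscale camera frame
    tags = detector.detect(frame)
    print(f"{len(tags)} tag(s) detected")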
Take a viewpoint shot.
                    
Remove the locators before each viewpoint shot to optimize performance.
                     
                    
                    
When a viewpoint is taken, several different colors are sent to the screens at the position of each tag. For each detected tag, Smode records both colors: the one emitted at that place on the screen and the one received by the camera.
It can then deduce, for a given place on the screen, the difference between the color sent and the color received.
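How this data is laid out inside Smode is not documented here; as an assumption-labelled sketch, a viewpoint can be thought of as a set of per-tag (sent, received) color pairs:

    # Sketch of an assumed structure (not Smode's file format): a viewpoint stores,
    # for each detected tag, the color emitted at the tag's position on the screen
    # and the color measured by the camera at the same place.
    from dataclasses import dataclass, field

    @dataclass
    class ColorSample:
        sent: tuple      # RGB emitted on the screen at the tag position
        received: tuple  # RGB measured in the camera image at the same position

    @dataclass
    class Viewpoint:
        samples: dict = field(default_factory=dict)  # tag id -> list of ColorSample

        def record(self, tag_id, sent, received):
            self.samples.setdefault(tag_id, []).append(ColorSample(sent, received))

    vp = Viewpoint()
    vp.record(tag_id=12, sent=(1.0, 0.0, 0.0), received=(0.92, 0.04, 0.03))
    # The per-tag difference between sent and received is what drives the local
    # color correction for that region of the screen.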
You can visualize the data of a viewpoint by unfolding "Viewpoints" at the bottom of the Color parameters of the XR Calibrator:
                    
 
                    
Once your viewpoint has been shot, move the camera to get another point of view, wait for the camera to be stable, and take another one.
You don't have to frame all the screens in a viewpoint shot. Look at the interesting parts of the stage to be calibrated.
Try as much as possible to take viewpoints facing the screens. Feel free to shoot twice from the same position: since the colors sent are randomized, this can improve the quality of the measurements.
If you have a fixed-position camera, take one or several shots from its position.
				
				 
                    In some cases, it can be interesting to "merge" the color models of the screens that make up the walls of your setup.
Select the corresponding XR Display information and then, in Color Model -> General -> Type, switch to Merge.
 
                    
                    A panel warns you that several target parameters will be deleted. Press "YES".
 
                    
In the XR Display information, merge the color model of one screen with the other one.
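The exact effect of merging is handled by Smode; as an assumption-labelled sketch, it can be thought of as pooling the measurements of both screens so that a single shared color model is fitted, instead of one per screen:

    # Assumption: merging is shown here as pooling the (sent, received) samples of
    # two screens so that one shared color model is fitted for both. Smode's exact
    # behavior may differ; the values are made up.
    samples_screen_a = [((1.0, 0.0, 0.0), (0.91, 0.05, 0.03))]
    samples_screen_b = [((1.0, 0.0, 0.0), (0.89, 0.06, 0.04))]

    merged_samples = samples_screen_a + samples_screen_b
    # One model (see the fitting sketch in "How it works") is then computed from
    # merged_samples and shared by both screens.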
 
                    
                    
You are now ready to start the calibration. Press Calibrate.
				 
                    
By default, only the color model is calibrated; the inverse color model is a lot more complex to set up and get good results with (and it takes a lot more time to calibrate).
                    
The calculation generates a collection of LUTs stored in the XR Display information, which can be applied to your setup using a Smart Lut.
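A LUT (look-up table) remaps every input color to a corrected one. The sketch below is not the Smart Lut implementation, only a minimal illustration of a 3D LUT lookup; the resolution and the identity data are placeholders, and a nearest-neighbor lookup is used where real LUT application would interpolate:

    # Minimal illustration of applying a 3D LUT (not the Smart Lut implementation).
    # The identity LUT below stands in for the calibrated data generated by Smode.
    import numpy as np

    N = 17  # assumed LUT resolution per axis
    grid = np.linspace(0.0, 1.0, N)
    lut = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), axis=-1)  # (N, N, N, 3)

    def apply_lut(rgb, lut):
        # Nearest-neighbor lookup for brevity; production code would interpolate.
        idx = np.clip(np.round(np.asarray(rgb) * (N - 1)).astype(int), 0, N - 1)
        return lut[idx[0], idx[1], idx[2]]

    print(apply_lut((0.25, 0.60, 0.90), lut))  # -> corrected RGB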
 
                    
                    7) Apply the Calibration
At this step, you have to modify the colors of your AR and Extended compositions to make them match the colorimetric profile of your setup.
You can drag and drop the .smartlut you just exported; this creates a Smart Lut that you can apply directly onto the Extended and AR compositions in your VFX processor.
 
                    
Or you can use a Modifier Stack Layer so that your color controls are centralized in your show and you can adjust them by hand.
 
                    
Once you have imported a Smart Lut, you need to select the Display (for the optional angle data).
 
                    
                    
If you want to directly connect the LUT to the display without exporting it, you can create a Smart Lut, change its mode to "internal Direct", and then select the LUT's Display.
