Posters

Fundamental Brightness and Lightness Scales
Saeedeh Abasi, RIT
Mark D Fairchild 

Modeling color appearance based on LMS cone responses is important because it helps account for individual cone responses and individual differences. To build a system of colorimetry on cone fundamentals, at least five color attributes should be modeled. These attributes can be modeled one dimension at a time rather than incorporating all color perceptions into one multidimensional space, so fundamental, one-dimensional scales of hue, lightness, brightness, and saturation/chroma can be very useful for a variety of colorimetric applications and for better understanding the principles of human color perception. Brightness and lightness are important fundamental attributes in color appearance models. A model for brightness and lightness prediction is proposed, built directly from cone fundamentals, in which brightness is a power function of the achromatic response. The model is physiologically plausible and can predict brightness and lightness for different surround conditions. The Helmholtz-Kohlrausch (H-K) effect is included in the model using the concepts of fluorence and gray content. The luminance of a stimulus at the fluorence transition, G0, can be considered the threshold luminance for each wavelength. G0 defines the luminance of 'equal chromatic brightness' and can be determined for the entire chromaticity diagram. The G0 luminance can therefore be used as an anchor for a specific chromaticity, so that the luminance of the target stimulus is normalized to the G0 luminance rather than to a white luminance. Using this method, the H-K effect can be predicted with good accuracy. The Stevens effect is also included in the model using the terminal brightness introduced by Stevens and Stevens. The performance of the model was compared with CIECAM16 and CAM16-Hellwig using the available data sets, and it shows good prediction of brightness and lightness under different viewing conditions.
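
As a rough illustration of the normalization step described above, the sketch below (Python; hypothetical function name and parameters) divides the stimulus luminance by the chromaticity-dependent G0 luminance and applies a power function. The cube-root exponent and scale factor are placeholders, not the model's published values, and the computation of the achromatic response from cone fundamentals is omitted.

```python
import numpy as np

def brightness_from_g0(Y, Y_g0, exponent=1/3, scale=100.0):
    """Illustrative only: G0-referenced power-function brightness/lightness.

    Y     : luminance of the target stimulus
    Y_g0  : G0 (zero-grayness) luminance for the stimulus chromaticity
    The exponent and scale are placeholders, not the model's fitted values.
    """
    return scale * (np.asarray(Y, dtype=float) / Y_g0) ** exponent

# Example: a stimulus at half of its chromaticity's G0 luminance
print(brightness_from_g0(Y=20.0, Y_g0=40.0))  # ~79.4 with the placeholder exponent
```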

Saeedeh Abasi is a PhD candidate at the Munsell Color Science Laboratory at Rochester Institute of Technology. She received her B.S., M.S. and PhD degrees in Textile Engineering from Amirkabir University of Technology. Her research interests are color science and image processing. 

Preserving Perceptual Brightness: A G0-Referenced Lightness Metric for Enhanced Color-to-Grayscale Conversion
Sanaz Aghamohammadi Kalkhoran, Rochester Institute of Technology
Mark Fairchild, Rochester Institute of Technology
Susan Farnand, Rochester Institute of Technology

Most images encountered in daily life are color images, yet grayscale representations remain indispensable in a wide range of applications due to their reduced data requirements, simplified operations, and efficient compatibility with downstream image processing and computer vision tasks. Grayscale images emphasize luminance and contrast, highlight critical visual details, and facilitate storage and transmission without the complexity of full-color data. However, conventional color-to-grayscale conversion methods often struggle to preserve the nuanced perceptual brightness relationships that emerge from highly saturated colors, potentially compromising naturalness and contrast. This highlights the importance of developing improved conversion algorithms that not only maintain perceptual fidelity and image detail, but also preserve task-relevant information in the resulting grayscale images. In this paper, we present a novel color-to-gray algorithm that incorporates a newly developed G0-referenced lightness metric (L*G0) to more faithfully represent perceived brightness. Unlike lightness methods that normalize luminance relative to a fixed diffuse white point, our approach defines a chromaticity-dependent luminance reference (YG0) using optimal color boundaries, thereby capturing the transition at which colors lose their reflective quality and appear self-luminous. We demonstrate that grayscale images generated by the L*G0-based method maintain greater perceptual fidelity, as subtle variations in saturation and hue are more accurately represented. Comparative analyses with established color-to-gray conversion methods indicate that our approach not only preserves critical spatial details and contrast, but also more closely aligns with human perceptual judgments, particularly in regions of high chroma. By integrating a more perceptually relevant measure of brightness into the grayscale conversion process, this work sets a new standard for perceptual accuracy and offers a valuable tool for researchers and practitioners in color imaging, display technologies, and computational photography.
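
A minimal sketch of how a chromaticity-dependent reference could be dropped into a standard L*-style grayscale conversion is shown below. It assumes per-pixel luminance Y and a G0 reference luminance Y_G0 are already available (the derivation of Y_G0 from optimal color boundaries, which is the substance of the authors' metric, is not reproduced here), and the function names are hypothetical.

```python
import numpy as np

def Lstar_g0(Y, Y_g0):
    # Hypothetical sketch: CIE L*-style compression of luminance Y
    # normalized by a chromaticity-dependent reference Y_G0 instead of
    # the luminance of diffuse white.
    t = np.clip(np.asarray(Y, dtype=float) / np.asarray(Y_g0, dtype=float), 0.0, None)
    delta = 6 / 29
    f = np.where(t > delta ** 3, np.cbrt(t), t / (3 * delta ** 2) + 4 / 29)
    return 116.0 * f - 16.0

def to_grayscale(Y, Y_g0):
    # Map the G0-referenced lightness to [0, 1] gray values
    return np.clip(Lstar_g0(Y, Y_g0) / 100.0, 0.0, 1.0)
```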

Sanaz Aghamohammadi earned her BSc in Textile Engineering from Amirkabir University of Technology in 2019 and went on to receive her MS in Polymer and Color Engineering in 2022. She is currently pursuing a PhD in Color Science at Rochester Institute of Technology's Munsell Color Science Laboratory. Her research interests focus on color perception, with a particular emphasis on chromatic brightness and brilliance.


Character Design With Colours

Sameena Anis

I will teach about creating characters and assigning colours! Prompts include creating characters from a story or emotion from your life, and also looking at other examples of creative use of colors in characters.

Sam Anis is a curiosity-driven artist and loves to explore every medium with playfulness. They make colourful, surreal stories about their identity as a brown, queer, and disabled artist. Sam loves making art with comics, illustration, fabric, sculpture, ceramics, writing, and much more. They are very into characters and indirect storytelling through metaphors. 

Color Association Correspondences Between Emojis and Emotions
Gwynneth Buckton, Rochester Institute of Technology
Tina Sutton, Rochester Institute of Technology
Christopher Thorstenson, Rochester Institute of Technology

Emojis are frequently used to communicate about abstract emotional concepts via digital icons that mimic or embellish emotion expressions. Because emotion concepts are abstract (i.e., involving intangible thoughts and feelings), communicating about emotion using visual media requires translation into concrete (i.e., directly perceivable) visual cues that convey emotional meaning. Emojis largely accomplish this by translating concrete facial expressions (e.g., smiles, frowns) to convey abstract emotion concepts (e.g., happy, sad). However, emojis often incorporate color cues to emphasize emotion concepts as well. For example, angry emojis are often colored red, while disgusted emojis are often colored green. This inclusion of color as a concrete visual cue to emotion likely suggests that color can meaningfully facilitate emotion communication via visual media. The current work seeks to better understand the correspondences between emojis, their perceived emotion concepts, and their color associations. In other words, we seek to address the extent to which people associate emojis with colors as a function of the emotion concepts that they represent. In the current experiment, participants describe (i.e., label) the emotion concept that is being conveyed for 22 different emojis. Then, participants provide color associations separately for each emoji, and for each corresponding label. The results indicate several instances of high correspondence between emoji-color and label-color associations (e.g., both angry emojis and the concept "angry" are highly associated with red). However, there are also several instances where these associations do not correspond well (e.g., colors associated with a nervous emoji do not correspond well to colors associated with that emoji's label). We will include discussion about factors that likely predict these differences, as well as implications for perception, cognition, and practical application.

Gwynneth Buckton: I completed my associate's in applied arts and sciences at Bellevue Community College in 2022 and will be a recent graduate of Rochester Institute of Technology at the time of the conference. I will be earning my bachelor's degree through the School of Individualized Study, curating a degree that allows me to study biological methods of cognition using a multi-disciplinary approach. I found my way to the Cognitive Psychology lab while working on the Psychology minor program and have since participated as a lab assistant for both Dr. Altobelli and Dr. Sutton. Outside of school and the lab, I enjoy drawing, playing a variety of instruments, arranging music, martial arts, and reading a wide variety of books. Prior to participating in Dr. Sutton's project about emojis, color, and emotion, I had not put much thought into the color palettes of emoji creation, let alone how that interacts with the emotion being portrayed. Now, I find myself thinking much more about the different ways that individuals view the same symbolism and how the design of the emoji, down to the colors, can make it both easier and more difficult to convey specific emotions.

Compensating for Color Deficits: Perceptual and Neural Adjustments in Anomalous Trichromacy
Fatemeh Charkhtab Basim, University of Nevada, Reno (UNR) 
Arsiak Ishaq, University of New South Wales
Erin Goddard, University of New South Wales

Color helps communicate important visual information. In anomalous trichromacy (AT), reduced color signals challenge this process. AT occurs when the medium (M) and long (L) wavelength cone photopigments have closer spectral peaks, reducing the LvsM comparison signal. This can impair color perception, but most AT individuals may perceive colors much more similarly to color-normal (CN) observers than expected, suggesting perceptual or post-perceptual compensation in AT individuals. We tested this through two experiments: color naming and contrast adaptation. In the naming experiment, CN and AT observers labeled colored squares of varying chromaticities displayed on backgrounds of different luminances. CNs showed a constrained achromatic region with a blue-yellow bias, while ATs displayed larger, more variable achromatic regions. ATs labeled stimuli as achromatic less often than expected, suggesting perceptual compensation. In the adaptation experiment, we measured LvsM contrast thresholds before and after adapting to LvsM contrast modulations (1 Hz for 120 sec). Adaptation reduces sensitivity by increasing contrast detection thresholds, with larger increases occurring at higher adapting contrast levels. ATs have lower sensitivity, so if there were no compensation they would be expected to show weaker adaptation. However, deutan ATs adapted more than predicted, showing partial perceptual compensation through neural gain adjustments. Protan ATs showed minimal adaptation, except for one outlier who adapted strongly. These findings suggest that some ATs experience neural compensation in early visual processing, which can help them balance their reduced chromatic sensitivity.

Fatemeh Charkhtab Basim: I'm a third-year PhD student in the Integrative Neuroscience program at the University of Nevada, Reno, with a background in polymer engineering and color science. My research focuses on color vision and perception, and I'm particularly interested in how color-deficient individuals perceive different colors and handle achromatic settings, and how they actually compensate for their lack of color vision. Before switching to neuroscience, I studied polymer synthesis, paint structures, and human color vision theories, which gave me a unique lens for my current work. In my projects, I use psychophysical experiments and tools like Psychtoolbox in MATLAB to explore how visual information is processed. Outside the lab, I'm involved with the Iranian Students Association, where I help foster cultural connections and support students. I work as an instructor during breaks, and I also love hiking, photography, and spending time with my friends. I'm excited to share my work at the ISCC conference and learn from other researchers passionate about color perception.

2HDRVD: The Handheld High Dynamic Range Video Dataset
Trevor Canham, York University, Toronto
Michael Murdoch, RIT
Andrew Sevigny

The lightweight form factor of new cinema cameras and recording hardware has enabled high-accuracy, real-time photometric measurement of diverse scenes at low cost. Using this equipment, a total of 783 high-dynamic-range, high-resolution, high-frame-rate RAW videos were captured in natural (Utah, Arizona, Washington, and Alaska, USA), urban (London, UK; Seattle and New York City, USA), and indoor scenes featuring diverse high-contrast motion content (ecological, human, animal, mechanical, etc.) and lighting conditions (natural and artificial, day and night). The dataset, its associated capture and processing workflow, and technical applications will be presented.

Trevor Canham is studying color imaging under the supervision of Michael Brown at York University in Toronto. He received his BSc in Motion Picture Science from the Rochester Institute of Technology and spent several years working in Marcelo Bertalmío's Image Processing for Enhanced Cinematography lab in Barcelona, Spain. His interests lie in the interaction between color phenomenology and imaging systems. He was recently awarded best student paper at the 31st Color & Imaging Conference and the Color Research Society of Canada's graduate student award.

Light and Color in the Work of Cruz-Diez: Phenomenology and the Creation of Atmospheres
Camila Consani, University of São Paulo - USP
João Carlos Cesar, University of São Paulo - USP

The Venezuelan artist Carlos Cruz-Diez (1923-2019) pioneered the disconnection of color from its physical support, projecting it into space through light. His work, grounded in the subtractive, additive, and reflective dimensions of color, spans eight research areas that investigate dynamic optical interactions, resulting in immersive and mutable visual experiences. Since 1959, Cruz-Diez developed techniques that produce unstable hues, visible only through the interaction between light, movement, and the viewer's gaze. This phenomenological approach transformed color into a continuously evolving phenomenon, merging the boundaries between art and architecture. His public interventions, found in avenues, airports, and buildings, use chromaticism as a central element, creating atmospheres that seamlessly integrate art and architectural space in a fluid and innovative way. The concept of atmosphere, deeply tied to phenomenology, plays a crucial role in enriching the experience of architectural spaces. The interaction between light, color, and texture creates environments that transcend materiality, transforming spaces into fluid and emotionally engaging experiences. The atmospheres created by Cruz-Diez not only evoke memories and emotions but also influence behavior, highlight architectural features, and soften visual boundaries, offering a richer sensory and cognitive experience. Cruz-Diez's work exemplifies the fusion of art and architecture, transcending conventional perceptions to create dynamic, transformative spaces. Through the interaction of light, color, and movement, his creations redefine the relationship between the individual and their environment, turning the aesthetic experience into a profound, innovative, and emotional encounter.

Color Reproduction on Metal Surfaces: Best Practices and Technical Insights
Sandra Dedijer, University of Novi Sad, Faculty of Technical Sciences, Department of Graphic Engineering and Design
Ma Mati, Ball Packaging
Marko Milanović, Ball Packaging

Printing on aluminum substrates (primarily cans) for larger print runs is typically executed using offset printing, most often with spot colors, while digital printing is employed for smaller runs and proof prints. Due to the technical differences between these two methods, it is crucial to ensure consistency in color reproduction to avoid significant visual discrepancies between proof and final prints. Additionally, the global nature of production, where identical designs are produced at various geographical locations, necessitates the standardization of prints, machine operations, and the alignment of printing substrates. To address these challenges, the Pantone Live system was introduced, providing uniformity in color reproduction across various materials, including paper, plastic, textile, and metal. This system, supported by digital color libraries comprising 42 databases, significantly improves communication between brands and manufacturers, ensuring consistency in color reproduction. In the metal deco industry, the system is applied to the design and printing of aluminum cans, overcoming challenges posed by the specific properties of metallic substrates. Pantone Live represents a significant innovation in color standardization, ensuring consistent color reproduction across various substrates and throughout all stages of the production process. Its application in the metal deco industry has demonstrated outstanding results in aligning colors, reducing variations, and achieving high-quality printed products. We aim to present the application of digital color libraries and the accuracy of color reproduction on metal substrates through visual and spectral evaluations. The analysis included device calibration, sample preparation, and printing, utilizing standards for calculating color differences (ΔE00) and the iColor Control iQC software tool. Measurement results will be presented in the context of deviations from standard values, including opaque and transparent colors, with and without specular reflection measurements. Additionally, a component for evaluating reflection and gloss (SRR_GLOSS) was introduced.
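
For readers who want to reproduce the kind of color-difference evaluation described above outside the iQC software, a ΔE00 value can be computed from two CIELAB measurements, for example with the open-source colour-science Python package (an assumption of this sketch; the patch values below are hypothetical).

```python
import numpy as np
import colour  # colour-science package (assumed available)

# Hypothetical CIELAB measurements of the same spot color on a proof
# print and on a production can.
proof_lab = np.array([52.4, 38.1, 21.7])
production_lab = np.array([53.0, 36.9, 23.2])

delta_e00 = colour.delta_E(proof_lab, production_lab, method='CIE 2000')
print(f"Delta E00 = {delta_e00:.2f}")
```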

Sandra Dedijer is employed as a full professor at the Department of Graphic Engineering and Design, Faculty of Technical Sciences in Novi Sad. She received her doctoral degree after defending the dissertation titled "Development of a model of process analysis of making flexo printing forms". She has published over 100 scientific and professional papers, 16 of which are in journals with an impact factor. She is the co-author of two textbooks, four practice textbooks, and a book chapter with an international publisher. She is the coordinator of one CEEPUS project. She is the head of Undergraduate Academic Studies at the Department of Graphic Engineering and Design and serves as the head of the Chair of Graphic Engineering and Design.

Universal Color Design for Thematic Maps
William Fischer, I-See-U

Data presentation via thematic maps routinely creates barriers to the maps' information for persons with vision deficiencies, and the primary culprit is poor color design. The poster will include a case study of a Centers for Disease Control map illustrating degrees of respiratory illness by U.S. state. The PDF file included in this submission contains images that illustrate the problems with, and solutions for, the map's color design; these images and more will be used in the poster.
• White backgrounds cause pupil constriction, creating problems for persons with low vision. Solution: easily solved with a dark background.
• High contrast / high saturation can cause issues for persons with light and scotopic sensitivity (including persons with intermittent migraine). Solution: color can be limited to the less saturated CMYK color space, and contrast capped at a 12:1 ratio.
• Overly subtle differentiation between color-data segmentations is hard to parse in high-light and glare situations on our phones, and in low room lighting with printed versions, especially for persons with most types of vision deficiency. Solution: increase the text and outline color values to meet the WCAG minimum 4.5:1 contrast requirement (see the sketch below).
• Colors don't differentiate well with data segments (high, moderate, etc.). Solution: add thick lines between regions, reorganize the color hues, and increase the color value range (light to dark).
• Poor color differentiation for color-blind persons. Solution: add correlation numbers and utilize colors that translate well for red-green color-blindness (99.999% of the cases). For the rarer blue-colorblind and no-color-vision persons, my proposed scheme has improved efficacy over the original, and it also includes numbers as a fallback.
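
The 4.5:1 WCAG check mentioned above is a simple computation over sRGB values; the sketch below implements the WCAG 2.x relative-luminance and contrast-ratio formulas (the sample colors are illustrative only).

```python
def srgb_to_linear(c):
    # c in [0, 1]; IEC 61966-2-1 sRGB decoding
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb255):
    r, g, b = (srgb_to_linear(v / 255.0) for v in rgb255)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def wcag_contrast(rgb_a, rgb_b):
    # WCAG 2.x contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05)
    la, lb = relative_luminance(rgb_a), relative_luminance(rgb_b)
    return (max(la, lb) + 0.05) / (min(la, lb) + 0.05)

print(round(wcag_contrast((255, 255, 255), (0, 0, 0)), 1))       # 21.0 (maximum)
print(wcag_contrast((118, 118, 118), (255, 255, 255)) >= 4.5)    # True: #767676 on white passes
```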

Bill Fischer is professor emeritus and founder of the Digital Art & Design program at Kendall College of Art and Design of Ferris State University. He is the author of the I-See-U blueprint for Inclusive, Socio-Emotional, Entertaining, and Universal design. He was the executive producer for The EPIC Project (Engaging Production Inspiring Classrooms) an ongoing collaboration with faculty, K12 educators, and field experts that build and test digital media products focused on inclusion and innovation. He's designed toys, buildings, automotive interiors, animated, printed, and interactive media for over 30 years. Bill is a multiple award-winning designer in the automotive, digital media, and games arenas. He led teams that earned Ford's best new product, three Motor Trend cars of the year, best in show in the American Advertising awards, and has earned seven patents. He supported teams that won best games at the Serious Play, and Meaningful Play game conferences. Most recently Bill has led teams that create board and digital games, animation, video, apps, AR/VR, and mixed reality media that utilize his universal design methods and tools to facilitate rich experiences for persons with disabilities and provide full participation in the ongoing socio-cultural fabric of the world we all share.

Spherical Color as an Alternative to Cubic Models  
James Garrard

Conventional color models used in display technology, such as the various cubic models, are based on the notion that component colors add without affecting the luminance of the resulting hues. The spherical color system I have developed is much better at adjusting for these perceptual issues while also maintaining ease of use in display technologies and other applications. The spherical model of color is based on the idea that color exists as a scalar value in three dimensions, the primaries red, green, and blue, such that the wavelengths are equidistant and sufficiently cover the visible spectrum when these components are composed together. This RGB space lies in the first octant and, combined with the axiom that luminance is the distance from the origin, forms a spherical section in the same way that the cubic model is restricted to positive values. Therefore, luminance forms spherical-triangle shells, and chroma and hue can be defined, respectively, as the distance away from the axis planes and the distance around the perimeter of that triangle. One benefit of this spherical color model is that all color transformations are just vector transformations, meaning you get gradients for free. Additionally, the HCL color model is very intuitive for humans to use and manipulate. Another benefit is that the HWB model is identical, but with the chroma and luminance inverted to white and black, which avoids several of the issues with cubic HWB that cause impossible states. The spherical model was originally designed to solve the secondary brightening in the cubic model, but it is also adaptable to adjusting for perceptual brightness in the primaries by scaling the primaries in relation to the chroma so that the white point is maintained while colors like green are darkened to a set point.
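
A minimal sketch of the model's core axiom (luminance as Euclidean distance from the RGB origin) is shown below; the chroma and hue definitions based on the axis planes and the spherical triangle's perimeter are specific to the author's full model and are not reproduced, and the function names are hypothetical.

```python
import numpy as np

def spherical_luminance(rgb):
    # Core axiom: luminance is the Euclidean distance of the RGB vector
    # from the origin.
    return float(np.linalg.norm(rgb))

def rescale_to_luminance(rgb, target_luminance):
    # Scaling the vector changes luminance while preserving its direction.
    v = np.asarray(rgb, dtype=float)
    return v * (target_luminance / np.linalg.norm(v))

red = np.array([1.0, 0.0, 0.0])
yellow = np.array([1.0, 1.0, 0.0])
print(spherical_luminance(red), spherical_luminance(yellow))  # 1.0, ~1.414
midpoint = 0.5 * (red + yellow)  # blends are plain vector interpolation
```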

James Garrard is an Automation Engineer with a passion for color theory. In his spare time, he explores the science and perception of color, focusing on innovative ways to represent and interact with it. He has developed a spherical color model, a quaternary naming system, and visualization software to showcase his ideas. 

Color Perception in Deaf and Hard of Hearing Individuals and Implications for Design
Miranda Garvey, RIT 
Elena Fedorovskaya, RIT

Research indicates that color perception itself is generally not significantly different between hard-of-hearing and hearing individuals. However, some studies suggest that the way deaf or hard-of-hearing people associate meaning with colors, and their ability to discriminate between subtle color variations, might be affected by their reliance on visual information and the lack of auditory input, potentially leading to slightly different color perception experiences depending on the individual and their specific level of hearing loss. These potential differences in color perception and responses to color information need to be taken into consideration when designing interactive experiences in immersive environments. In this paper we describe existing findings related to color and multisensory perception in deaf and hard-of-hearing people and outline the implications for user experience designs that involve color.

Miranda Garvey is a 3rd-year Neuroscience Major with a Psychology Minor at Rochester Institute of Technology. I am also on a pre-med track but interested in getting my MD/Ph.D. I am a part of URISE and an NIH-funded researcher at RIT. My current research includes investigating differences in color and multi-sensory perception in hearing and deaf or hard-of-hearing individuals using EEG technology to measure ERP amplitudes. My research interests are predominantly within neuroscience, specifically neuronal plasticity in different sensory modalities within individuals who have lost or inhibited perception in certain regions. My primary inquiry is how plasticity in the brain is affected in latent deaf and hard-of-hearing individuals compared to hearing individuals. However, I am interested in acoustic neuromas and autoimmune diseases that affect hearing loss. As a latent deaf individual, I want to help create treatment plans and measures that can one day restore hearing, among other modalities, for individuals who have suffered sensory losses, as I have. We must know how our sensory regions interact to understand better how we can lose sense. That interaction holds the answers we are searching for. 

Systems for Improved Color Optics and Painting
Allyson Glenn, University of Saskatchewan

Like Josef Albers' or Albert Henry Munsell's color notations, my approach to color as an artist and educator is both theoretical and experiential. Centered on artist paint color combinations, my creative research has led to the development of systems that simplify and improve the conditions for painting. These strategies can enhance efficiency in paint color mixing, increase production, and limit expenditure on materials. In this presentation I will describe and explore the following techniques: 1. A color selection approach to combining or layering colors, one that allows for versatility, luminosity, and spatial depth in paint applications. 2. A color reference system to gain consistent outcomes when color mixing and layering. 3. A simple "do-it-yourself" paint preservation method that can significantly extend the working time of paint (e.g., up to a year for acrylic paint). To observe the many variations of color combinations within a singular "universal" palette of primaries and secondaries, I gathered hundreds of color recipes for mixing and layering, from which I developed a color reference system. To create efficiencies in my production, I use the reference system to pre-mix color for large-scale works and created a new method for preserving paint. I introduced simple variations of these techniques to my students at the University of Saskatchewan, and with the ability to preserve pre-mixed colors, which stayed wet for weeks (including water-based), students were more successful at wet-into-wet or impasto paint applications. Upon introducing the third innovation, the paint preservation system, to students, I observed improvement in their paintings in color consistency and mark making. When put into practice together, the above strategies can lead to improved color choices, consistent color mixing/layering, and less wastage. By sharing these studio-tested methods, I hope to improve the overall conditions and experience in painting for seasoned painters, students, and beginners alike.

Allyson Glenn is a visual artist and art educator. She explores color through a variety of art mediums including painting, drawing, and animation. Her artwork has been widely shown across North America and Europe, India, China, and Greece. Her investigations into color led her to create systems that improve the painting process, spatial depth and vibrancy of the pictorial image. As an art educator, she often shares these explorations with students. Allyson is an Associate Professor for the School for the Arts, University of Saskatchewan, Canada.

Developing a Custom Color Calibration Target for Objective Skin Color Measurement Using Color-Corrected Dermoscopy
Maysoon Harunani, Washington University in St. Louis
Orlee Sadinoff
Anmol Jarang

Objective skin color assessment through colorimetry can measure skin disease progression and ensure diverse population recruitment in trials. Colorimetry measurements are provided in the L*a*b* color space. The L*-b* plane is useful because the Individual Typology Angle (ITA), the angle between the b* axis and the line from the point (L* = 50, b* = 0) to the measured (L*, b*) value, corresponds to melanin content. Colorimeters can be costly or difficult to use on small or curved anatomical sites. Photography provides a more accessible colorimetry tool but has been limited in maintaining color consistency due to variations in camera calibration and non-uniform illumination. A dermatoscope, a cross-polarized, 10x-magnification epiluminescence imaging device commonly used in clinics, was repurposed to provide objective colorimetry measurements when combined with a color calibration target. For each image, a color correction matrix minimized the color difference between the known and measured calibration target values. This matrix was applied to the image to extract a skin color measurement. While calibrating dermoscopy images with commercial targets demonstrated the ability to estimate ITA, these targets are rectangular, occupying most of the dermatoscope's field of view (FOV). Additionally, most of the colors in the target are not skin colors, making the extracted L*a*b* values less accurate for very light or dark skin tones and causing greater errors in the estimated ITA. To improve color calibration for skin color while integrating colorimetry into dermatoscopes, a 21 mm diameter custom calibration target with 24 patches (each 1.6 x 1.6 mm) was designed and 3D printed to fit the periphery of the dermatoscope's FOV. The target consists of 6 neutral grey patches and 18 optimally selected patches from the Pantone SkinTone Guide with known L*a*b* coordinates calculated from existing skin reflectance spectra. The development of this calibration target suggests that color-corrected dermoscopy can be integrated into a clinical setting for objective skin color assessment.
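
The two numerical steps described above, deriving ITA from a measured (L*, b*) pair and fitting a color-correction transform from calibration patches, can be sketched as follows; the least-squares fit in device values shown here is a simplified stand-in for the study's color-difference minimization, and the function names are hypothetical.

```python
import numpy as np

def individual_typology_angle(L_star, b_star):
    # ITA (degrees): angle of (L* - 50, b*) measured from the b* axis;
    # assumes b* > 0, as is typical for skin measurements.
    return np.degrees(np.arctan2(L_star - 50.0, b_star))

def fit_color_correction(measured, reference):
    # Simplified stand-in: affine least-squares mapping from measured
    # patch values (N x 3) to known reference values (N x 3).
    A = np.hstack([measured, np.ones((measured.shape[0], 1))])
    M, *_ = np.linalg.lstsq(A, reference, rcond=None)
    return M  # apply as np.hstack([pixels, ones]) @ M

print(individual_typology_angle(65.0, 15.0))  # 45.0 degrees
```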

Maysoon Harunani is a PhD candidate at Washington University in St. Louis, studying Biomedical Engineering. Her research in the Shmuylovich Lab in the Division of Dermatology focuses on the quantification and characterization of skin color. Her work aims to address healthcare disparities related to skin color, aligning with her role as a fellow for the Washington University Center for the Study of Race, Ethnicity, and Equity. She also serves as an assistant to the instructor for the BME Senior Design course, mentoring students in professional writing, technical communication, and project management. She holds a BS in Biomedical Engineering with a concentration in Medical Optics and a minor in Optics from the University of Rochester. During her undergraduate years, she was recognized as a Grand Challenges Scholar by the National Academy of Engineering for her work toward Engineering Better Medicines.

The World Was Black and White and We Were in Screaming Color: Color Terminology and Association with Taylor Swift's Music 
Sofie Herbeck, RIT
Leah Humenuck, RIT

Taylor Swift is known to use elaborate color terminology in her lyrics, such as "but loving him was red" or "it's blue, the feeling I've got". Thus, it would not be surprising if her listeners formed a rich network of color associations with her music. The current research investigates this possibility by examining the frequency and types of color terms used throughout her discography (2006-2024), and comparing these with fans' color associations with her music. The English language has 11 basic color terms: red, orange, yellow, green, blue, purple, pink, brown, gray, white, and black (Kay & Regier, 2003). Their uses were quantified across all lyrics, along with modified color terms (a basic color term with an adjective, e.g. "dark gray") and non-basic color terms (e.g. "golden" or "maroon"). This allowed the development of linguistic color palettes per song and per album, charting the progression of Swift's use of color terminology over different eras. Fans were surveyed on color associations with Swift's music, starting with which of the 11 basic color terms they thought Swift uses the most. The survey included album titles and album covers (without title text) to assess color associations with albums' linguistic versus visual components. For each album, participants were asked to pick the song they most associated with a color. We compared the alignment between questionnaire results and lyric data analysis to better understand listeners' perceived color associations with Swift's music. Future work may consider these results alongside the use of color in other visual expressions of Swift's music, such as music videos or tour production elements. This could provide further insight into mechanisms that promote fan color associations with the artist and explain how subcultural experiences inform color-concept associations in cognition more broadly.
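
The frequency count underlying the linguistic color palettes can be illustrated with a few lines of Python; the sketch below handles only the 11 basic terms (plus the "grey" spelling variant) and ignores modified and non-basic terms, and the sample lyric is hypothetical.

```python
import re
from collections import Counter

BASIC_COLOR_TERMS = {"red", "orange", "yellow", "green", "blue", "purple",
                     "pink", "brown", "gray", "grey", "white", "black"}

def basic_color_term_counts(lyrics: str) -> Counter:
    # Tokenize on letters and apostrophes, then keep only basic color terms
    words = re.findall(r"[a-z']+", lyrics.lower())
    return Counter(w for w in words if w in BASIC_COLOR_TERMS)

print(basic_color_term_counts("The world was black and white, now it's red and blue"))
# Counter({'black': 1, 'white': 1, 'red': 1, 'blue': 1})
```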

Sofie Herbeck is a Color Science PhD student in the Munsell Color Science Laboratory (MCSL), at the Rochester Institute of Technology (RIT). Sofie received a B.A. in Computer Science & Theatre and Performance Studies at the University of California, Berkeley in 2021. During and after their undergraduate degree, they worked with Profs. Ren Ng and Austin Roorda as a research assistant on a collaborative project between computer science and vision science to probe human color vision at the photoreceptor level, using adaptive optics. At RIT, advised by Profs. Michael Murdoch and Christopher Thorstenson, Sofie conducted a project on transparency adjustment and perception of faces in optical see-through augmented reality. Currently, when not thinking about Taylor Swift's lyrical use of color terminology, Sofie is working with the MCSL's trichromator to compare and assess various methods of measuring individual color matching functions in humans.

Leah Humenuck is a PhD candidate in Color Science at the Munsell Color Science Laboratory at Rochester Institute of Technology. Leah is enchanted to research imaging, reproduction, and lighting for cultural heritage. Ultimately, she takes the proverbial Polaroid and turns the world from black and white (inaccurate) into screaming (accurate) color. She is also a book and paper conservator repairing any crumpled-up piece of paper and removing the dust on every page. This background in conservation she'll be using for the rest of her life to inform her color science research. Her goal is that whatever item or color topic she is researching, it doesn't go out of style and can still make the whole place shimmer. Both her roles as color scientist and conservator are to ensure the collections long live at cultural heritage institutions, and like a folk song they will be passed on. Leah had the time of her life obtaining a BS in Chemistry from Sweet Briar College and an MA with honors in Conservation from West Dean College of Arts and Conservation.

Creating Colored "Bound Volumes" with Digital Object Identifiers
Daniel Martin, University of Wisconsin-Parkside

This proposal describes the conceptual basis and production of bound volumes of academic journals using Digital Object Identifiers (DOIs) as the source for hexadecimal colors. Those colors are combined with other colors representing all published material from a selection of journals over multiple volumes, years, and sometimes tens of thousands of articles. The resulting images form a kind of "bookshelf" or timeline of the publication, showing the frequency of publication, different types of DOI syntax, and shifts in publishers or publishing technologies. The images are also intriguing abstractions in their own right - patterns appear and fade, some images resemble landscapes while others include explosive or unpredictable spurts of color that are idiosyncratic. The addition of analytic text to some images helps provide contextual clues to the image's origins without explicitly revealing them. These images, while largely an aesthetic exploration, also hint at possibilities for representing aspects of digital and academic publishing: the frequency of publication in a specific outlet; the ebbs and flows of output across disciplines; the (often uncompensated) labor of academics, editors, and reviewers; ways to track citations and publications in a visual manner rather than with numbers; and new ways of visualizing data in an overwhelmingly online world. Future iterations of these static images could involve API-driven data that is visualized in real time, leading to an evolving generative work of abstract art with representational underpinnings.
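
The abstract does not spell out how a DOI becomes a set of hexadecimal colors, so the sketch below shows one plausible reading (hypothetical function, not the artist's actual mapping): keep the DOI's hexadecimal characters and slice them into six-digit #RRGGBB codes.

```python
import re

def doi_to_hex_colors(doi: str) -> list:
    # Collapse the DOI to its hexadecimal characters, then slice
    # consecutive 6-character groups into #RRGGBB codes.
    hex_chars = re.sub(r"[^0-9a-fA-F]", "", doi)
    return [f"#{hex_chars[i:i + 6]}" for i in range(0, len(hex_chars) - 5, 6)]

print(doi_to_hex_colors("10.1000/xyz123"))  # ['#101000']
```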

Daniel (Dan) Martin is an assistant professor of graphic design at the University of Wisconsin-Parkside, where he teaches web design, typography, and design foundations. Originally from the Upper Peninsula of Michigan, Dan received his MFA in Design from the University of Minnesota. Prior to graduate school and academic life, Dan worked at the University of Chicago Press for more than a decade. When not teaching his students the ins and outs of HTML, CSS, JavaScript, serifs, sans-serifs, and more, he works with select cultural organizations as a freelance graphic designer as well as maintaining his personal visual art practice. That practice explores issues of technology, publishing, and labor through data-based abstraction. Dan lives in Kenosha, WI, with his wife, Rachel Rolland and dog, String Bean. 

Color Constancy in Virtual Environments with Head-Mounted and Flat-Panel Displays
Andrea Avendano Martinez, RIT
Christopher Thorstenson, RIT
Michael Murdoch, RIT 

Virtual reality (VR) is often used as a tool for enhancing presence and interaction within an immersive experience. One of the goals of VR is to approximate the perception of the real world as closely as possible. In real life, variations in lighting conditions can significantly influence how colors are perceived, both over time and in different environments. However, the human visual system automatically compensates for these changes so that constant color perception is largely maintained. This process is called 'chromatic adaptation', whereby we tend to discount the color of the illuminant so that objects' colors appear consistent across variable lighting conditions. Color constancy is usually quantified with an index that measures how well our perception of an object's color remains consistent across different lighting conditions. A higher color constancy index indicates that an object's color appears more consistent across varying illuminations, while a lower index suggests greater variability in perceived color. Little research has been done to investigate how color constancy manifests in VR, in which we view a virtual environment through an emissive screen. This study aims to understand how changing illumination conditions in VR environments affects an object's color appearance. This is investigated through a psychophysical color constancy experiment in a virtual environment with a head-mounted display (HMD) and a conventional flat-panel display. We determine the color constancy index of observers under various illuminations. The findings of this research will provide insight into the capability of VR to reproduce color constancy to a similar degree as observed in real life. This will enhance our understanding of how color perception functions in virtual environments and may contribute to developing more immersive VR experiences.
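
The abstract does not state which index formulation the authors use; one common form (a Brunswik-ratio-style index) is sketched below, with all inputs assumed to be coordinates in an approximately uniform space such as CIELAB or u'v' chromaticity, and the function name hypothetical.

```python
import numpy as np

def constancy_index(observer_match, perfect_match, no_constancy_match):
    # Brunswik-ratio style index: 1 = perfect constancy, 0 = no constancy.
    # perfect_match: prediction for full discounting of the illuminant;
    # no_constancy_match: prediction for zero adaptation.
    a = np.linalg.norm(np.asarray(perfect_match, float) - np.asarray(no_constancy_match, float))
    b = np.linalg.norm(np.asarray(observer_match, float) - np.asarray(perfect_match, float))
    return 1.0 - b / a
```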

Andrea Avendano Martinez is a Masters student in Color Science at the Rochester Institute of Technology (RIT) in Rochester, NY. She holds a B.S. degree in Motion Picture Science from RIT, where her senior capstone focused on correcting camera metamerism errors in LED walls for virtual production. Andrea's research interests encompass color appearance phenomena and color perception in virtual and augmented reality (VR/AR). In 2022, she participated in the Academy GOLD Rising program at the Academy of Motion Picture Arts and Sciences in Los Angeles, CA. In 2022-2023, she completed two internships at Dolby Laboratories in Sunnyvale, CA, as a Dolby Vision Content Intern and a Vision and Color Science Intern.

Visual Perception and Contrast Sensitivity: Evaluating the Effects of Tinted Lenses
Likhitha Nagahanumaiah
Susan Farnand
Christopher Thorstenson

This study explores the influence of eyewear lenses on human visual perception, focusing on spatial-chromatic contrast sensitivity for color pairs like cyan-red and magenta-green. It examines whether eyewear enhances contrast perception and evaluates participants' accuracy and response times in identifying spatial-chromatic contrast patterns. The objectives are to: 1. Determine cut-off spatial frequencies for different eyewear types. 2. Identify individual contrast sensitivity thresholds. 3. Assess accuracy and response times with color contrast-enhancing tints. Participants completed visual tasks involving color patches with varying contrast patterns while wearing Revision Military tinted eyewear. Two experiments were conducted to evaluate performance across different lens configurations. Experiment 1 involved adjusting spatial frequencies of stimuli to determine visibility thresholds for each eyewear type. Results showed variations in contrast sensitivity thresholds across color pairs and eyewear configurations. Lightly tinted lenses enhanced thresholds compared to clear lenses, while dark-tinted lenses outperformed smoke lenses. Tinted eyewear generally increased contrast sensitivity thresholds, highlighting its impact on visual sensitivity. Experiment 2 measured participants' contrast sensitivity thresholds without eyewear. These thresholds were then used as input stimuli in a 4-alternative forced-choice (4-AFC) experiment. Participants identified patches containing grating patterns while wearing different eyewear configurations. This experiment evaluated whether specific eyewear enhances visibility and optimizes color contrast for improved visual performance. The findings reveal that tinted eyewear improves contrast sensitivity thresholds and visual performance in tasks involving spatial-chromatic patterns. This research underscores the potential of color-enhancing eyewear to enhance human visual perception, with applications in sports, military operations, and everyday visual tasks. By identifying how specific tints affect spatial-chromatic contrast sensitivity, the study provides valuable insights for the development of eyewear designed to optimize visual performance.

Likhitha Nagahanumaiah earned her Bachelor of Science in Electronics and Communication from Sri Krishnarajendra Silver Jubilee Technological Institute (SKSJTI), India. She then completed her Master's degree in Electrical Engineering with a focus on Signals and Image Processing at the Rochester Institute of Technology (RIT). Currently, she is pursuing a PhD in Color Science at RIT. Likhitha's research lies at the crossroads of color science and advanced imaging technologies, emphasizing color image analysis, image quality evaluation, and mixed reality systems. Her work incorporates computer vision techniques to study color perception and its practical applications in diverse contexts. She is particularly interested in exploring how traditional principles of color science can be integrated into emerging technologies like augmented and virtual reality. Through her academic and research endeavors, Likhitha is committed to deepening the understanding of color and image quality, aiming to contribute meaningful advancements to both scientific exploration and the development of next-generation visual technologies.

Color as Cultural Code: From Psychedelia to Brat Summer
Zena O'Connor, Design Research Associates

The use of color as a form of code within a broader lexicon of visually based signifiers has a long tradition dating back centuries. Specifically, color became one of several non-verbal visual cues employed to denote tribal affiliation in pre-literate cultures, and an early example was the Picts, a tribe that used woad to paint their bodies a bluish color. This practice enabled the Picts to effectively differentiate themselves from other tribes and to present a fearful countenance to adversaries, prompting Julius Caesar to comment on their intimidating appearance in battle. Whilst it's not always the most reliable lingua franca, color can act as a highly effective signifier under certain circumstances and contexts, immediately conveying important information in a cultural setting. This paper explores three key examples, all of which are instantly recognizable from the colors or color palette associated with them. The effectiveness of these examples is illustrated by the way in which the colors serve specific purposes, indicate cultural affiliations, and act as a disruptive visual barrier to others. In addition, one of the examples discussed herein has remained instantly recognizable many decades after it first emerged last century.

Zena O'Connor is one of a handful of people whose PhD investigated the interface between color and human response (University of Sydney, Faculty of Architecture). A designer by training, Zena is an evidence-based color design research consultant, and for the last twenty years she has worked on a range of projects that focus on providing insight, validation, and color strategies for applied design and design of the built environment, including strategies to improve environmental visual literacy in healthcare, aged care projects and the built environment; data visualization; branding and logo design. Clients include Aevum Limited (AUS), Auckland District Health Board, Auckland City Hospital, Bupa (NZ), Deicke Richards Architects (AUS), Greene King (UK), Haugstad Møbel (Norway), Klein Architects (Auckland), Mirvac (AUS), New Zealand Health Design Council, Norna AI (Sweden), Oslo Planning Dept. (Norway), and Suncorp Bank (AUS). In addition, Zena has lectured in applied color and theories of color at the University of Sydney (Faculty of Architecture), University of NSW (Art & Design), and Sydney Design School. She has published 80+ peer-reviewed books, book chapters, academic articles, and conference papers and was awarded the Resene Color Maestro Prize (2017) for a community-based color installation in Sydney, Australia.


Chromatic Adaptation in Displays: The Influence of Ambient Environment
Eddie Pei, PoCS/MCSL RIT
Susan Farnand
Mark Fairchild

Chromatic adaptation to displays involves a complex interplay of factors, including surrounding light, background properties, and the luminance of stimuli. This study was aimed at understanding the impact of these factors on visual perception through a series of psychophysical experiments. Our findings show a significant interaction between background and surround conditions in adaptation responses. Two frameworks are proposed to capture the mixed adaptation mechanism. These models show promising results in predicting visual perception responses under various ambient environment conditions. This research offers insight into understanding chromatic adaptation in such complex conditions.

Performance Analysis of Deep Learning Architectures in Reconstruction of Overexposed Images
Alireza Rabbanifar, Munsell Color Science Laboratory (RIT)
Mekides Assefa Abebe, Munsell Color Science Laboratory (RIT)
Elena Fedorovskaya, Munsell Color Science Laboratory (RIT)

Over-exposure is a common issue in images captured under challenging lighting conditions, resulting in significant loss of detail and color degradation. This degradation negatively impacts the overall quality of the images and their usability in various applications. Deep learning techniques have shown state-of-the-art performance in addressing these challenges by restoring details and colors in overexposed images. Different architectures, mainly based on Convolutional Networks, Autoencoders, and Generative Adversarial Networks, have been proposed for this task, utilizing various loss functions tailored to enhance recovery capabilities. In this study, we focus on analyzing and understanding the color and detail recovery capabilities of different deep learning architectures, including Vision Transformers and Diffusion Models, which have not been explored for over-exposure correction before. A representative model from each architecture type is trained on a custom HDR image dataset where over-exposure is introduced through tone mapping. The trained models will be evaluated for reverse tone mapping applications, where the dynamic ranges of the enhanced images are expanded to improve visual experiences on HDR displays, improving the clarity and depth of visual information. Objective evaluation is conducted using metrics such as RMSE, SSIM, and a perceptual feature similarity metric (based on the VGG model). Subjective performance will be evaluated through a psychophysical experiment in which human observers rate the quality of reverse tone-mapped HDR images displayed on an HDR monitor. By combining objective and subjective evaluations, this work provides a comprehensive analysis of deep learning models for over-exposure correction, offering insights into their architectural strengths and limitations, and highlighting their contributions to practical applications in HDR technology and visualization.
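
The two standard objective metrics named above can be computed as in the sketch below, which assumes reference and restored images as float arrays in [0, 1] and a recent scikit-image release; the VGG-based perceptual metric is not reproduced here.

```python
import numpy as np
from skimage.metrics import structural_similarity

def rmse(reference, restored):
    # Root-mean-square error over all pixels and channels
    diff = reference.astype(float) - restored.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))

def ssim(reference, restored, data_range=1.0):
    # channel_axis=-1 for H x W x 3 images (scikit-image >= 0.19)
    return structural_similarity(reference, restored,
                                 data_range=data_range, channel_axis=-1)
```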

Multisensory Color-Haptic Interaction in Augmented Reality
Alireza Rabbanifar, RIT
Pratheep Kumar Chelladurai, RIT
Mekides Assefa Abebe, RIT

Visual perception of color and tactile perception of texture work together to create a unified experience of objects and surfaces. Research suggests that color, as perceived visually, can influence tactile sensory processing and affect how we perceive touch. Studies have shown that people often associate certain colors with specific tactile attributes, such as smoothness or softness. These interaction effects are especially important for accurately identifying objects in immersive virtual and augmented reality environments, where users engage with virtual or mixed worlds using multiple senses. To investigate color-haptic interactions in a mixed reality environment, we asked participants wearing augmented reality headsets to touch several texture samples that were concealed from their view. They were then asked to rate the perceived similarity of these textures to visually presented images of textures that varied in color. The specific textures and colors were selected based on a previous study (Rabbanifar et al., 2024). We hypothesized that visually presented images of "rougher" textures with smooth colors would be judged as more similar to "smoother" textures perceived by touch. The results of the experiment will be presented.

Using Generative AI for Data Color Scheme Suggestion
Theresa-Marie Rhyne, Color Maven - Visualization Consultant 

This presentation focuses on the use of Generative AI systems to provide color scheme suggestions in data visualization. There are three types of data color schemes: sequential, for a logical progression of data; diverging, to emphasize a break point in the data; and qualitative, to provide distinct colors for labeling data. ChatGPT, Google Gemini, and Microsoft Copilot can provide data color scheme suggestions in color hex code format. A color hex code is a way to represent colors using hexadecimal values. It's commonly used in digital applications. The code is a six-digit combination of numbers and letters, preceded by a hash symbol (#). Each pair of digits represents the intensity of red, green, and blue in the color, respectively. The process of asking a Generative AI system for color scheme suggestions in color hex code format is demonstrated. Next, the color hex code sequence is visualized with a color mapping tool like the Adobe Color app. From there, errors in the sequences are detected and color blindness (deficiency) tests are performed. The finalized color scheme is then applied to a data visualization. Polished colorized data visualizations with the Generative AI color scheme suggestions are shown.
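
The hex-code structure described above (a hash followed by three two-digit hexadecimal channel values) can be unpacked with a couple of lines of Python; the example color is illustrative, not a Generative AI suggestion.

```python
def hex_to_rgb(hex_code: str) -> tuple:
    # "#1F77B4" -> (31, 119, 180): each hex-digit pair is one channel (R, G, B).
    h = hex_code.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in range(0, 6, 2))

print(hex_to_rgb("#1F77B4"))  # (31, 119, 180)
```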

Theresa-Marie Rhyne has over three decades of experience in producing and colorizing digital media and visualization. In December 2024, CRC Press published the second edition of her book on "Applying Color Theory to Digital Media and Visualization" that includes her five stage process of colorizing a data visualization. She has consulted with the Stanford University Visualization Group on a color suggestion prototype system, the Center for Visualization at the University of California at Davis, and the Scientific Computing and Imaging Institute at the University of Utah on applying color theory to ensemble data visualization. Prior to her consulting work, she founded two visualization centers: (1) the United States Environmental Protection Agency's Scientific Visualization Center in the 1990s and (2) the Center for Visualization and Analytics at North Carolina State University in the 2000s. In 2023, she received an IEEE Computer Society Distinguished Contributor Award. She is currently exploring and writing on the use of Generative AI for color scheme suggestion. 

MISHA.br, Bringing MISHA RIT Technology to São Paulo, Brazil
Júlia Schenatto, IF USP
Maria Fernanda Pilotto Brandi, FAU USP
Bianca Fonseca, FAU USP

The presentation introduces the MISHA.br initiative, covering the event's organization, prototype development, and communication challenges in adapting the technology from the United States to Brazil, focusing on the city of São Paulo. The Rochester Institute of Technology (RIT) developed MISHA (the Multispectral Imaging System for Historical Artifacts), which is relevant to several fields. The MISHA.br initiative fosters discussions on cultural heritage preservation and new chromaticity technologies, facilitating research exchanges between Brazilian and international institutions. The event and Brazilian prototype were developed with three University of São Paulo (USP) units. The "MISHA.br Meeting: Technology, Chromaticity, and Heritage," held in October 2024 both in person and online, launched the Brazilian prototype. It was supported by USP's Pro-Rectory of Research and Innovation (PRPI USP) and organized by the National Council for Scientific and Technological Development (CNPq) group "Color, Architecture, and City" from the School of Architecture and Urbanism and the School of Design (FAU), along with the Institute of Physics (IF), the Faculty of Philosophy, Letters, and Human Sciences (FFLCH), and a PRPI USP scholarship holder. Embira (FFLCH USP) and the Laboratory of Archaeometry and Applied Sciences to Cultural Heritage (LACAPC IF USP) developed the prototype. The initiative promotes interdisciplinary heritage discussions, sharing research across higher education institutions. The team plans to expand the prototype's use and integrate more professionals and researchers from other universities following the meeting.

Optimizing Material Segmentation Using FCN-ResNet101: The Role of Color-Based Data Augmentations with the COCO Dataset
Soroush Shahbaznejad, Rochester Institute of Technology
Mekides Assefa Abebe, Rochester Institute of Technology
Michael Murdoch, Rochester Institute of Technology

Material segmentation in complex, real-world environments relies heavily on a model's ability to adapt to diverse visual conditions, such as subtle color casts, uneven lighting, and unpredictable backgrounds. This study examines the impact of targeted, color-based data augmentations on the performance of an FCN-ResNet101 semantic segmentation model trained on various material categories selected and classified from the COCO dataset. We analyze the effects of systematically varying brightness, contrast, hue, saturation, and gamma augmentations on the model's performance, assessed using metrics such as precision, recall, Intersection-over-Union (IoU), and Binary Cross-Entropy (BCE) loss. Additionally, we examine the feature representations of different material types within the trained model and how they vary with the evaluated color augmentations. The model's performance with respect to changes in material representation will also be presented. The findings of our study will highlight the importance of carefully selected color augmentations in enhancing segmentation accuracy for different material categories. This approach enables models to better capture the richness and diversity of visual environments.
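As a hedged illustration only (not the authors' training code), a color-based augmentation pipeline of the kind evaluated here might be expressed with torchvision as follows; the parameter ranges are placeholders.

# Illustrative color-based augmentation pipeline (assumed torchvision API);
# the study's actual parameter ranges and implementation may differ.
import random
import torchvision.transforms as T
import torchvision.transforms.functional as TF

color_augment = T.Compose([
    # Jitter brightness, contrast, saturation, and hue within modest ranges.
    T.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3, hue=0.05),
    # Apply a random gamma adjustment to simulate nonlinear tone changes.
    T.Lambda(lambda img: TF.adjust_gamma(img, gamma=random.uniform(0.7, 1.4))),
])

# Applied to the input image only; the segmentation mask is left unchanged
# because these augmentations alter color, not geometry.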

Soroush Shahbaznejad received his B.Sc. and M.Sc. in Textile Engineering (Textile Chemistry and Fiber Science) from the University of Guilan and Amirkabir University of Technology, respectively, where his thesis explored perceptual effects of near-gray backgrounds on fluorescent color samples. He is now a second-year Ph.D. student in the Munsell Color Science Laboratory at Rochester Institute of Technology, focusing on color science, spectral data processing, color appearance models, and computer vision. 

Perceptual Thresholds of Facial Lighting in Emissive and AR Displays
Xinmiao Zhang, Rochester Institute of Technology
Sofie R. Herbeck, Rochester Institute of Technology
Christopher Thorstenson, Rochester Institute of Technology

Augmented Reality (AR) aims to integrate virtual content with the real world to enhance, or "augment," perception of and interaction with our environment. One existing challenge is that "real" and virtual content often have disparate lighting conditions, which may be detrimental to AR experiences when the two are merged. However, it is not currently known how different these lighting conditions need to be before people can discriminate them, or before the difference becomes detrimental to user experiences. Further, when displaying faces, a promising application of AR technology, this challenge can affect skin tones disproportionately, due to differences in light rendering and some AR optical limitations. In this study, two psychophysical experiments are proposed to assess participants' perception of simulated lighting conditions applied to rendered backgrounds (simulating the real surroundings) and virtual faces (simulating external AR content). The experiments will assess the effects of different skin tones, lighting conditions, and AR display methods on visual thresholds for perceiving lighting matches, mismatches, and preferences. These findings will provide new insights into improving the perceptual consistency and inclusivity of face rendering in AR systems, particularly with regard to how lighting conditions interact with different skin tones to affect user experiences.

Xinmiao Zhang is a Ph.D. student at the Munsell Color Science Laboratory at the Rochester Institute of Technology in Rochester, NY. She received her BS in Software Engineering from Harbin Normal University. Xinmiao's current research focuses on the intersection of augmented reality (AR) and human perception. 


Evaluating the Influence of Eyewear on Perception of Small Color Difference in Reflective Samples
Shuyi Zhao, Rochester Institute of Technology
Christopher Thorstenson, Rochester Institute of Technology

Tinted eyewear is increasingly utilized in outdoor environments to protect against ultraviolet radiation and manage perceived luminance levels. While these protective functions are well established, such modifications can affect the perception of critical color details such as traffic signals. Although previous studies have examined color perception through tinted eyewear using standardized tests like the D-15, the ability to distinguish small color differences remains insufficiently studied. This research investigates how different tinted eyewear affects observers' ability to distinguish small color differences in reflective samples, with implications for understanding how specific eyewear designs influence color discrimination performance. A comprehensive evaluation was conducted using two stimulus sets: (1) six adjacent Munsell sample pairs varying only in hue, and (2) seven parameric pairs generated through Kubelka-Munk theory modeling of 16 pigments. These pairs were designed to have similar CIELAB values but different spectral reflectances, matching the reflectance of common environmental colors from the Macbeth ColorChecker (skin tone, sky, foliage, blue flower). Six types of eyewear (Clear, Smoke, Verso, Alto, Umbra, Oakley) were examined in this study. Color differences (ΔE2000) were predicted using the measured radiance of the light source and the spectral reflectance, R(λ), of the stimuli. These predictions were validated through psychophysical experiments with 27 observers using a scaling method. Results demonstrate that tinted eyewear can alter color discrimination compared to neutral (Clear, Smoke) eyewear, an effect that varies with the eyewear's transmittance and the stimuli's reflectance properties. For example, one foliage pair showed a ΔE2000 of 2.37 under Clear eyewear that increased to 5.21 under Verso eyewear, with corresponding mean observed differences of 2.79 and 5.51, respectively. Overall, the observed color difference evaluations aligned with the predictions, with a correlation coefficient (r) of 0.816. This research enhances the understanding of how tinted eyewear affects color perception and provides a methodology for evaluating color discrimination performance.
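As a hedged sketch of the prediction pipeline described above (placeholder data and function names, not the authors' code), the predicted ΔE2000 for a parameric pair seen through an eyewear filter can be computed from the spectral product of source radiance, eyewear transmittance, and sample reflectance:

# Hedged sketch in Python; spectra, CMFs, and variable names are placeholders.
import numpy as np
import colour  # colour-science package, used only for the CIEDE2000 formula

def spectrum_to_xyz(spd, cmfs):
    """Integrate a spectral power distribution against color-matching functions
    (both sampled at the same wavelengths; shapes (N,) and (N, 3))."""
    return (cmfs * spd[:, None]).sum(axis=0)

def xyz_to_lab(xyz, white_xyz):
    """Standard CIELAB conversion relative to the (filtered) white."""
    def f(t):
        return np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    fx, fy, fz = f(xyz / white_xyz)
    return np.array([116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)])

def predicted_de2000(radiance, transmittance, refl_a, refl_b, cmfs):
    """Predicted color difference of a parameric pair seen through eyewear:
    stimulus = source radiance x eyewear transmittance x sample reflectance."""
    white = spectrum_to_xyz(radiance * transmittance, cmfs)  # filtered white reference
    lab_a = xyz_to_lab(spectrum_to_xyz(radiance * transmittance * refl_a, cmfs), white)
    lab_b = xyz_to_lab(spectrum_to_xyz(radiance * transmittance * refl_b, cmfs), white)
    return colour.delta_E(lab_a, lab_b, method="CIE 2000")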

Shuyi Zhao is a Ph.D. candidate in Color Science at the Rochester Institute of Technology (RIT), where her research focuses on human color vision, deepening the understanding of 3D object perception, and exploring the color reproduction of 3D printing. She earned a Master's degree in Additive Manufacturing and 3D Printing from the University of Nottingham, where her thesis, supervised by Prof. Lyudmila Turyanska and Prof. Geoffrey Rivers, explored the integration of optically active nanoscale materials with inkjet printing technology. Shuyi holds a Bachelor of Science in Printing Engineering from the Beijing Institute of Graphic Communication. During her undergraduate studies, she spent a year as an exchange student at the University of Leeds, majoring in Graphic and Communication. As part of her bachelor's thesis, supervised by Prof. Min Huang, she contributed to a study on color difference calculation methods for 3D-printed sphere samples, which was published in Acta Optica Sinica. Her academic interests lie at the intersection of color science, additive manufacturing, and visual perception, with a particular focus on advancing the understanding of 3D color perception and reproduction. 

About ISCC and Color Impact 2025

The Inter-Society Color Council is the principal interdisciplinary society in the United States dedicated to advancing color research and best practices in industry, design/arts, and education.

ColorImpact 2025 promises to be a significant event for color professionals worldwide. Registration for the conference will open in the first quarter of 2025.

