Functional tests are good at catching broken logic: a button that doesn’t respond, a screen that doesn’t load, a field that rejects valid input. But they’re blind to a whole category of bugs: the ones you can only see, such as clipped text, a shifted layout, or the wrong color.
Visual testing closes that gap. It compares screenshots across runs and flags unexpected changes. This approach, often called screenshot testing or visual regression testing, makes visual differences measurable.
Two new additions to your toolkit:
assertScreenshot compares the current screen, or a cropped portion of it, against a saved baseline. If the match falls below the threshold, the test fails.
The default threshold is 95%, meaning up to 5% of pixels can differ before the assertion fails. You can adjust this per assertion using thresholdPercentage.
cropOn, available on both takeScreenshot and assertScreenshot, narrows the capture to a specific element or container using a Maestro selector. This makes screenshot testing practical for real apps: isolate the part of the screen you care about and ignore parts of the UI that naturally change between runs, such as the status bar clock, timestamps, or user-specific avatars.
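For example, cropping to a stable container keeps the status bar and its clock out of the comparison entirely. This is a sketch; the `FeedContainer` id is a hypothetical example, not from a real app:

```yaml
# Capture only the feed container, excluding the status bar clock above it.
# FeedContainer is a hypothetical element id for illustration.
- takeScreenshot:
    path: FeedContent
    cropOn:
      id: FeedContainer
```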
The simplest possible test: run takeScreenshot first to create the baseline.
```yaml
- takeScreenshot: MainScreen
```
After a new release, run assertScreenshot to compare the current screen against that baseline:
```yaml
- assertScreenshot: MainScreen.png
```
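Put together in a full flow file, the baseline run might look like this minimal sketch. The `appId` and the `launchApp` step are assumptions for illustration; only the screenshot command comes from the steps above:

```yaml
# baseline flow — a sketch; com.example.app is a hypothetical appId
appId: com.example.app
---
- launchApp
# First run: creates the MainScreen baseline image.
# After a release, swap this for `- assertScreenshot: MainScreen.png`.
- takeScreenshot: MainScreen
```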
A more precise version, cropped to a specific element, with a custom threshold:
```yaml
- takeScreenshot:
    path: ProductCard
    cropOn:
      id: ProductCardContainer
```
Then assert after your next release:
```yaml
- assertScreenshot:
    path: ProductCard.png
    cropOn:
      id: ProductCardContainer
    thresholdPercentage: 98
```
The default thresholdPercentage of 95.0 is a practical starting point for most screens. Increase it to 98 or 99 for pixel-sensitive components like charts or custom illustrations. Lower it slightly for screens with subtle dynamic content that cannot be fully cropped.
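As a sketch of the pixel-sensitive case, a chart component might get a tighter threshold. The `RevenueChart` id and path are hypothetical examples:

```yaml
# Pixel-sensitive component: tighten the match requirement to 99%.
# RevenueChart is a hypothetical element id for illustration.
- assertScreenshot:
    path: RevenueChart.png
    cropOn:
      id: RevenueChart
    thresholdPercentage: 99
```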
When a test fails, Maestro generates a diff so you can quickly see what changed.
Full docs: assertScreenshot · takeScreenshot