Selenium Java tutorial

1. 🤖 How it works

Applitools SDKs work with your existing test framework: they simply take screenshots of a page, element, region, or iframe and upload them, along with DOM snapshots, to our Eyes server. Our AI then compares them with screenshots from previous test runs (aka baselines) and tells you whether there is a bug. It's that simple!

Applitools AI with RCA picture

1.1 Baseline vs. Checkpoint images

When you first run the test, our AI server simply stores that first set of screenshots as baseline images. When you run the same test again (and every time thereafter), the AI server compares the new set of screenshots, aka checkpoint images, with the corresponding baseline images and highlights any differences in pink.

Baseline vs Checkpoint
The picture above shows the side-by-side view of the baseline and checkpoint images
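As a rough sketch of the baseline/checkpoint relationship, here is a toy model in Java. The class, method, and step names are invented for illustration only and are not part of the Applitools SDK; it just captures the idea that the first screenshot for a step becomes the baseline and every later one is a checkpoint compared against it:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// Toy illustration of baseline vs. checkpoint (not the Applitools SDK).
public class BaselineStore {
    public enum Result { NEW_BASELINE, MATCH, DIFF_FOUND }

    // Stored "baseline" screenshots, keyed by checkpoint/step name.
    private final Map<String, byte[]> baselines = new HashMap<>();

    public Result check(String stepName, byte[] screenshot) {
        byte[] baseline = baselines.get(stepName);
        if (baseline == null) {
            baselines.put(stepName, screenshot.clone());
            return Result.NEW_BASELINE;   // first run: save as the baseline
        }
        return Arrays.equals(baseline, screenshot)
                ? Result.MATCH            // identical: nothing to review
                : Result.DIFF_FOUND;      // differences would be highlighted
    }
}
```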

1.2 Marking the test as "Pass" or "Fail"

When the AI compares the baseline and checkpoint images and finds a legitimate difference, it marks the test as Unresolved. This is because the AI doesn't know whether the difference comes from a new feature or a real bug, so it waits for you to manually mark the test as Pass or Fail the first time.

If you mark the unresolved checkpoint image as a "Fail", any further runs with a similar difference will automatically be marked as "Failed".

Mark the checkpoint as a fail
The picture above shows how to mark the checkpoint image as Failed

If you mark the unresolved checkpoint image as a "Pass", it means the difference is due to a new feature, so we save the checkpoint image as the new baseline and mark the current test as Pass. Going forward, any future test runs are compared with this new baseline.

Mark the checkpoint as a Pass
The picture above shows how to mark the checkpoint image as Passed
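The Pass/Fail workflow above can be sketched as a toy state machine in Java. Again, every name here is invented for illustration and is not the Applitools API: a new diff comes back Unresolved, accepting it promotes the checkpoint to the new baseline, and rejecting it makes later runs with the same image fail automatically:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// Toy model of the Unresolved -> Pass/Fail resolution workflow
// (invented names, not the Applitools API).
public class ResolutionModel {
    public enum Status { NEW, PASSED, FAILED, UNRESOLVED }

    private final Map<String, byte[]> baselines = new HashMap<>();
    private final Map<String, byte[]> rejected = new HashMap<>();

    public Status check(String name, byte[] checkpoint) {
        byte[] baseline = baselines.get(name);
        if (baseline == null) {                        // first run: store baseline
            baselines.put(name, checkpoint.clone());
            return Status.NEW;
        }
        if (Arrays.equals(baseline, checkpoint)) return Status.PASSED;
        byte[] known = rejected.get(name);
        if (known != null && Arrays.equals(known, checkpoint))
            return Status.FAILED;                      // same diff was rejected before
        return Status.UNRESOLVED;                      // wait for a human decision
    }

    // "Thumbs up": the diff is a new feature, so the checkpoint becomes the baseline.
    public void accept(String name, byte[] checkpoint) {
        baselines.put(name, checkpoint.clone());
        rejected.remove(name);
    }

    // "Thumbs down": the diff is a bug; remember it so reruns auto-fail.
    public void reject(String name, byte[] checkpoint) {
        rejected.put(name, checkpoint.clone());
    }
}
```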

Note:

  • Applitools AI has been trained on hundreds of millions of images. It doesn't do a pixel-by-pixel comparison, which would lead to a lot of false positives; instead, it simulates real human eyes, ignoring normal differences that humans would ignore and highlighting only those that humans would flag as bugs.

  • ACCURACY: The AI's current accuracy rate is 99.9999%! That means for most applications the odds of seeing a false positive are about 1 in a million.
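To see why exact pixel equality is too strict, here is a minimal Java illustration: an exact diff flags tiny rendering noise (anti-aliasing, sub-pixel shifts) as a difference, while a per-pixel tolerance ignores it. This is only a toy stand-in for the idea; Applitools' actual comparison is AI-based, not a simple tolerance threshold:

```java
// Toy contrast between a naive exact pixel diff and a tolerant one.
// Images are modeled as 2-D int arrays of pixel intensities.
public class FuzzyCompare {
    // Returns true when every pixel differs by at most `tolerance`.
    public static boolean matches(int[][] base, int[][] check, int tolerance) {
        for (int y = 0; y < base.length; y++)
            for (int x = 0; x < base[y].length; x++)
                if (Math.abs(base[y][x] - check[y][x]) > tolerance)
                    return false;
        return true;
    }
}
```

With `tolerance = 0` this is the naive pixel-by-pixel diff the note warns about: any one-unit rendering wobble fails the comparison, while a small tolerance lets it pass and still catches a genuinely changed region.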

A powerful test results dashboard

We provide a state-of-the-art dashboard that makes it easy to analyze differences, report bugs straight from the dashboard, and more.

Seeing test result summary
The picture above shows the summary view

2. 🖼 Analyzing differences
3. 🐞 Reporting bugs (straight into Jira or GitHub)
4. ✅ Prerequisites
5.1 🚀 - Run the existing demo app
5.2 🤓 - Add Applitools to an existing project
6. 🚀 Try Visual Grid 🔥

Resources

Posted by 小强找BUG