Automa Blog


Happy New Year 2015!

We would like to wish all our users, fans & customers all the best in the New Year. We hope 2015 will be prosperous for everyone! In particular we would like to wish you many successes in the field of testing & automation, hoping that our tools can help you achieve what you need.

Happy New Year 2015

Welcoming our latest customer: Esri Inc

Logo of Esri

We are happy to announce that, after renowned companies like Intel and Samsung, we have another global specialist among Automa's customers: Esri, which specialises in GIS solutions. Esri connects governments, industry leaders, academics, and nongovernmental organizations with the analytic knowledge they need to make critical decisions that shape our planet.

If you have not tried Automa yourself yet, we encourage you to give it a go! It's so simple, everybody can do it. Simply go to our download page, fill in your details and click Download. You will receive a Windows executable which you can immediately run, without prior installation.

Happy automating!

Welcoming UC Berkeley as Automa's latest customer

Logo of the University of California, Berkeley

After high-profile companies like Intel and Samsung, we are very proud to announce that we now also have the highly-renowned University of California, Berkeley as one of our customers.

UC Berkeley are using Automa to automate the testing of some applications developed in-house. The fact that Automa is so lightweight and does not require installation makes it very easy to integrate into Berkeley's existing infrastructure.

For a selection of some of Automa's other customers, please visit our home page (scroll down for the selection of customers).

The Bug Hunt Game at EuroSTAR 2013

Update 05/11/2013: The Bug Hunt Game will take place on Wednesday, Nov 6 in the Community Hub at EuroSTAR. Come find us!

As explained in a previous blog post, we won two places for this year's EuroSTAR testing conference in Gothenburg, Sweden. We can't wait to attend this great event, and have prepared a special game for the other attendees, to let them relax, have fun and meet new people.

The rules of the game are the following: Each player is either a "bug" or a "bug hunter". Players receive badges that show which of the two they are. When a bug and a bug hunter meet, they play rock paper scissors. The winner takes one of the other player's lives. A scoreboard will be kept, and at the end of the day the winner (i.e. the person with the most lives) will be announced. We're curious to see whether it will be a bug or a bug hunter!

The game was developed during a workshop by Oana Juncu at Agile Testing Days 2013, in close collaboration with Jesper Lottosen (@jlottosen on Twitter).

The game will most likely take place on Wednesday or Thursday. You can sign up for it in the EuroSTAR Community Hub. We will publish further details here and via our Twitter account.

See you at the conference and, as always, happy automating! :-)

Upcoming Testing Conferences

Agile Testing Days 2013 Logo EuroSTAR 2013 Logo Software Quality Days 2014 Logo

One of the things we have been busy with over the past weeks was registering for some of the most important upcoming testing conferences in Europe. They represent an ideal opportunity for us to meet our users (you!) in person, and learn first hand about their needs and requirements. The way to these conferences was not always easy, so read on for some experiences of a startup trying to get into some of the most prestigious testing events in Europe.

Agile Testing Days, October 28-31, 2013 in Potsdam / Berlin, Germany

The Agile Testing Days are one of Europe's biggest testing events, with many speakers who are prominent in the Agile Testing community. The organizers are friendly, quick and precise in their work, and were happy to cater for the needs of a startup. Their team have been extremely pleasant and helpful in our communications, and we are very much looking forward to attending the conference. We will be there on October 30th. If you will be there too and would like to chat, give us a shout on our Twitter page!

EuroSTAR Software Testing Conference, November 4-7, 2013 in Gothenburg, Sweden

EuroSTAR is unquestionably the biggest software testing event in Europe. This doesn't prevent it from being leading edge: This year's conference theme is "Questioning Testing" and applies testing to Testing itself. Why do some techniques work well in some situations, and not so well in others? How can you give your clients a credible account of your (testing) work? The critical look at the field itself in this year's conference promises to be both challenging and very interesting.

Our way to EuroSTAR

It took us several attempts to get a place at EuroSTAR. As a startup, we need to monitor our expenses very tightly, so we were looking for an affordable way of attending this great testing event.

Our first attempt consisted of submitting a talk proposal to EuroSTAR. If successful, it would have allowed us to attend EuroSTAR for free. Luckily, our proposal was not accepted. Why luckily? Because our proposal wasn't very good, and its rejection sparked a very interesting discussion with the programme chair of this year's EuroSTAR, Michael Bolton.

When our proposal was not accepted, the programme chair Michael Bolton sent us an email asking whether we would like more detailed feedback on our submission. Of course, we said yes. Michael raised several very valid points about our proposal, and even took the time to answer more detailed follow-up questions that came up in the ensuing discussion. We were very impressed by this, because as programme chair, Michael had to deal with literally hundreds of similarly rejected submissions.

A little bit later, we were invited to write a guest post on the EuroSTAR blog. We used this opportunity to write about the success of our unsuccessful submission to EuroSTAR. The blog post was received very well, and garnered an unusually high amount of encouraging comments. You can find it on the EuroSTAR blog, here.

The team behind EuroSTAR are very dedicated to the community and are offering several competitions in which you can win a free ticket for the conference. We took part in several of these competitions with little success, until another competition hosted by the Ministry of Testing together with EuroSTAR was announced.

The goal of the new competition was to take a selfie photo. The best three photos were to be selected by Rosie Sherry and her team at the Ministry of Testing. The winner was then to be chosen by a public vote.

We knew this was our best chance to attend EuroSTAR, so we brainstormed about what the best selfie photo would be to win. Eventually, we came up with the idea of taking a picture of one of us with a prominent figure, in Madame Tussauds in Vienna. To top it off, we decided to wear a self-printed T-shirt with the Ministry of Testing logo on it, to ensure we really made it into the final. In the end, we chose two selfie pictures as our favorites: One with the Queen and one with 007.

The picture with 007 got selected into the final. What ensued was a week of constantly asking (spamming) our friends on all possible channels to vote for us. The competition ended last Friday, and we were very happy to be notified on Sunday by Rosie Sherry that we had won. We are extremely grateful to Rosie and the Ministry of Testing for organizing this competition, and to the EuroSTAR team for their dedication to the community. They are hard at work making this an unmissable event. We can't wait to attend.

All that's left now is to book flights and accommodation for our stay in Gothenburg. If you know of an affordable place to stay during the conference, it would be great if you could let us know in the comments below! Please, also let us know if you would like to have a chat at the conference, either in the comments below or on Twitter.

Software Quality Days, January 14-16, 2014 in Vienna, Austria

The Software Quality Days is an annual event in Vienna that has grown steadily in importance in recent years. Their organizing team, too, are very friendly and helpful, and have been very supportive in enabling us not only to attend but also to exhibit. We will be presenting our next generation GUI automation tool Automa there, and can't wait to answer any questions from the audience.

There is one main reason for us to go to these conferences: To meet fellow testers and our users. We are new to all of the above events and would love to catch up with anyone who would like to get to know us a little. If you do, and are at one of these conferences, please just leave a comment below, or ping us on our Twitter page.

Happy automating! And see you soon :-)

Automa 2.0: Speed up your scripts!

It's been a long way, but we finally made it: Version 2.0 of our next generation GUI automation tool Automa is live. It's time for a little recap...

Full-time work on Automa began in April 2012. The first basic version was published in October of the same year. It lacked many features, but was enough to give out to users and get feedback on. This - your! - feedback has been driving Automa's development ever since: We made Automa's support for international characters ö, ä, ê, é etc. incredibly simple. We gave real-life examples and videos of the things you can do with Automa and refined its API to truly make it the simplest GUI automation tool out there. We won a prize and went to CeBIT. At the same time, we added WPF support and made Automa 66% faster. Then came the fine-tuning: improved window handling, more commands, Java support, more detailed search, even more commands and finally, much improved image search algorithms. Phew!

With all these features already implemented, you might ask "what's next?" Well, how about the ability to:

Speed up your Automa scripts by 40%

The new version of Automa adds a Config class that can be used to fine-tune Automa's behaviour. For example:

Config.auto_wait_enabled = False

This line tells Automa to disable the auto-wait functionality. When auto-wait is enabled, Automa automatically waits after performing GUI actions such as start, click or write. The amount of time waited is calculated dynamically, using criteria such as the current CPU usage. This makes Automa scripts very stable; however, Automa is conservative when deciding how long to wait and may sometimes wait longer than absolutely necessary. To speed up the execution of your scripts, the property auto_wait_enabled can thus be used to disable the automatic wait facility. This is useful in cases where you know that waiting is not required, or when you are already synchronizing your script with the command wait_until.

Another configuration value exposed by the Config class is wait_interval_secs:

Config.wait_interval_secs = 0.3

When auto_wait_enabled is true and Automa waits after performing a GUI action, wait_interval_secs determines how often Automa checks whether the action has completed. Setting this property to a lower value such as 0.1 can thus also speed up your scripts.
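Conceptually, auto-wait is a polling loop, and wait_interval_secs controls the polling period. Here is a plain-Python sketch of such a loop; this is an illustration of the general pattern, not Automa's actual code:

```python
import time

def wait_for(condition, timeout_secs=10.0, interval_secs=0.3):
    """Poll `condition` every `interval_secs` seconds until it returns
    True or `timeout_secs` elapses. A smaller interval notices
    completion sooner, at the cost of more frequent checks."""
    deadline = time.monotonic() + timeout_secs
    while True:
        if condition():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval_secs)

# A condition that becomes true after a short delay:
start = time.monotonic()
done = lambda: time.monotonic() - start > 0.2
assert wait_for(done, timeout_secs=2.0, interval_secs=0.05)
```

Lowering the interval, as with Config.wait_interval_secs, trades a little extra CPU for earlier detection that an action has completed.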

The results? We used the above properties to fine-tune and selectively disable auto-wait, like so:

 
>>> Config.auto_wait_enabled = False
>>> write("John", into="Name")
>>> press(TAB)
>>> write("Smith")
>>> Config.auto_wait_enabled = True
>>> click("Submit")

Similar modifications were performed on all of Automa's regression test scripts. This reduced the time taken to run the scripts from 14 minutes to just under 10 minutes, a speed improvement of roughly 40%!
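Measured as throughput rather than elapsed time, the arithmetic behind that figure works out as follows:

```python
# Elapsed time for the regression suite dropped from 14 to 10 minutes.
old_minutes, new_minutes = 14.0, 10.0

# Measured as throughput, the suite now completes 14/10 = 1.4x as many
# runs per unit time: a 40% speed improvement.
speedup = old_minutes / new_minutes - 1.0
assert abs(speedup - 0.4) < 1e-9
```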

We also added several other small but useful properties to the new Config class. You can read all about them on our documentation page.

New commands for scrolling

Just for the sake of completeness, we should mention that we added commands that allow you to simulate scrolling the mouse wheel: scroll_up, scroll_down, scroll_left, scroll_right. You can find more information about them on our documentation page.

As always, we'd be very happy to hear your comments or questions in the "Comments" section below.

Happy automating!

Our Definition of Software Quality

There is an interesting recent article by Gerald Weinberg about the definition of software quality. In it, the author argues that the quality of software is always subjective and involves often-conflicting properties. Some of the most familiar such properties are:

  • Zero defects
  • Lots of features
  • High performance
  • Elegant coding
  • Low development costs
  • User-friendliness

If you've been looking at our homepage, you will know that we make it our mission to develop frameworks and tools that help other IT companies develop better software. Given the subjectivity of software quality as described above, what, then, does "better" mean for us? Here's our pick:

No defects

Given our name BugFree Software, it should come as no surprise that being defect-free is one of our core values. There are several reasons for this. First, bugs are at best annoying for our users and at worst make our products unusable - and thus unsellable. Second, bugs are annoying for us: They disrupt our development, incur administrative overhead and are expensive to fix (even more so when they're fixed late). The wrong attitude towards even small bugs can significantly impair companies in the long run, as for instance Elisabeth Hendrickson explains. We view the absence of bugs as a foundational property of high-quality software. That is why we kill all bugs dead.

Exceeding user expectations

When somebody uses your software, they have a mental model of what it should do and how it should react to various actions. If your software meets these expectations your users will be comfortable using it. If your software does not respond to the actions of your users in the way they would have expected it to respond, for instance because of a bug, your users will be put off.

Even better than simply meeting your users' expectations is to positively surprise them: to work in situations where they had not strictly expected, but merely hoped, that your application would behave in a particular way. If you manage to meet your users' expectations in most cases and positively exceed them in a few others, your users will be grateful and enthusiastic about your product. This is something Apple is very good at.

"Doing the right thing" is one of the main goals of Automa's intelligent automation API. An example of this principle in action in Automa is given by the following code (executed inside a file dialog):

click("New Folder")
rightclick("New Folder", select="Properties")

Here's a video of this behaviour (an excerpt of the Automa video on our starting page):

Automa's Contextual Intelligence

As you can see in the video, Automa is smart enough to not right-click the "New Folder" button but rather the newly created folder - just like a human would do.

Doing one thing well

Trying to do too many things in an application leads to a convoluted user interface, bugs, and half-baked solutions in the branches of the application's functionality tree. Instead of half-heartedly doing everything, we strive to focus. This is a clear trade-off against the item "Lots of features" mentioned in the list at the beginning of this post. In return, we can more easily achieve our goals of "No defects" and "Exceeding user expectations" while keeping development costs low.

A decision where striving to do one thing well played a key role was when we chose to release the first version of Automa with a simple command-line console as an interface to the underlying Python API. We could have first written an IDE for Automa, with syntax highlighting, version control integration, a debugger etc. However, such a solution would have never been as complete, well-documented or stable as any of the many existing Python IDEs.

Focusing on the above three properties has several nice side-effects. The "Zero defects" policy minimizes technical debt and in combination with comprehensive test automation and "Doing one thing well" keeps development and maintenance costs low. Decreased costs are also reaped by our customers: Zero defects lets our users simply enjoy the advantages of our product without getting side-tracked by broken functionality. We also argue that "Doing one thing well" is more cost-effective for our users: It allows us to competitively price our products while easily allowing our customers to integrate our tool into their existing environment. Finally, while "User-friendliness" is not solely comprised of "Zero defects", "Exceeding user expectations" and "Doing one thing well", we believe that the latter go a long way towards achieving the former. Nevertheless, user-friendliness is constantly in our minds when making decisions about Automa's implementation.

We have already mentioned "Lots of features" as one notion of quality we are compromising on. The other, though to a lesser extent, is "High performance": Automa's high-level approach and its efforts at making intelligent choices like a human being require more computational resources than other automation tools. This is reminiscent of the difference between programming in an assembly language and in a more high-level language such as C/C++ or even Java/Python. The former is very fast and gives you complete control over everything that is happening, but it also forces you to deal with a lot of technical detail, which is a significant barrier to overcome before you can achieve your goals. The latter approach is slower but deals with much of the technical detail for you, freeing you to focus on solving your actual problem. We do keep an eye on performance, but firmly place Automa in the latter category.

In summary, choosing a definition of quality always involves a trade-off between properties, of which some are in conflict while others reinforce each other. The trade-off described above of emphasizing "Doing one thing well", "Exceeding user expectations" and "No defects" while compromising on "Lots of features" and, to some extent, "High performance" has positive effects on "User-friendliness" as well as "Low development costs". For us at BugFree Software, this prioritization hits just the sweet spot in the balance of properties of high quality software. However, this is of course heavily influenced by the fact that we are a start-up with no legacy code, whose products are non-mission-critical desktop applications. Depending on your situation, your definition of quality may be very different. Either way - tell us in the comments below!

Kill Bugs Dead

While our name BugFree Software certainly represents one of our core values, it goes without saying that all software of any appreciable complexity, including our products, will contain bugs. All we can do - what we have to do - is to fix any bugs we find and make sure they never return again.

We recently had a bug where Automa would fail to start, without displaying an error message, when the installation directory was not writeable. This would occur, for instance, when the user installed Automa with admin privileges to a folder requiring such privileges, but then started Automa from a non-elevated account. Since we require Automa's installation directory to be writeable, the fix was to display an error message telling the user that they might have to start Automa with administrator privileges:

Error message for non-writeable installation directory

In the spirit of acceptance test driven development, the first step in fixing the above problem consisted of writing an automated system test that captures the erroneous behaviour. The backbone of our system tests for Automa is a little Python class that allows sending inputs to and checking outputs from a console process:

class ApplicationConsoleTester(object):
    def send_input(self, input):
        pass  # Implementation details hidden...
    def expect_output(self, expected_output):
        pass  # Implementation details hidden...

You (roughly) use it like so:

cmd = ApplicationConsoleTester("cmd")
cmd.send_input("echo Hello World!")
cmd.expect_output("Hello World!")

When the expected output is not received within a given timeout, an AssertionError is raised. This makes it very easy to use ApplicationConsoleTester in conjunction with one of the unit testing frameworks available for Python.
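For illustration, here is a minimal sketch of how such a helper could be built on Python's subprocess module. This is a hypothetical reconstruction, not Automa's actual implementation:

```python
import queue
import subprocess
import threading
import time

class ApplicationConsoleTester:
    """Sketch of a console test helper: sends lines to a child
    process's stdin and asserts on its stdout (hypothetical
    reconstruction for illustration)."""

    def __init__(self, command, timeout_secs=5.0):
        self.timeout_secs = timeout_secs
        self.process = subprocess.Popen(
            command,
            stdin=subprocess.PIPE, stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT, text=True, bufsize=1,
        )
        self._lines = queue.Queue()
        # Read stdout on a background thread so expect_output can time out.
        threading.Thread(target=self._read_stdout, daemon=True).start()

    def _read_stdout(self):
        for line in self.process.stdout:
            self._lines.put(line)

    def send_input(self, text):
        self.process.stdin.write(text + "\n")
        self.process.stdin.flush()

    def expect_output(self, expected_output):
        # Accumulate output lines until the expected text appears,
        # raising AssertionError on timeout.
        deadline = time.monotonic() + self.timeout_secs
        received = ""
        while time.monotonic() < deadline:
            try:
                received += self._lines.get(timeout=0.1)
            except queue.Empty:
                continue
            if expected_output in received:
                return
        raise AssertionError(
            "Expected %r, got %r" % (expected_output, received))

    def has_terminated(self):
        return self.process.poll() is not None
```

With a child process that echoes its input, send_input and expect_output behave just like the "cmd" example above, and the raised AssertionError integrates naturally with any Python unit testing framework.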

To highlight the above bug, we wrote the following Python TestCase:

class NonWriteableInstallDirST(unittest.TestCase):
    # Some implementation details hidden...
    def test_error_message_for_non_writeable_install_dir(self):
        self.automa_install_dir.set_writeable(False)
        automa_tester = ApplicationConsoleTester("Automa.exe")
        automa_tester.expect_output(
            "Cannot write to Automa's installation directory. If you "
            "installed Automa with administrator privileges then you "
            "might also have to start Automa with those privileges.\n"
            "\nType Enter to quit."
        )
        sleep(5)
        self.assertFalse(
            automa_tester.hasTerminated(), 
            "Automa did not give the user enough time to "
            "see the error message."
        )
        automa_tester.send_input("\n")
        self.assertTrue(automa_tester.hasTerminated())

Once we had written the test and seen it fail (in the style of good acceptance-test-driven development), it was easy to fix the bug.

Having the test automated allows us to execute it every time Automa is built. This ensures that the bug will never occur again. But there is another benefit of having such a system test: By discovering and fixing the bug, we have effectively enriched Automa's feature set. Keeping the system tests in sync with development means that the system test suite forms a comprehensive, up-to-date documentation of the required functionality. There is no more partial knowledge as to what works and what doesn't, possibly spread amongst multiple individuals. There is only one truth: that determined by the tests.