Top 5 Automation Testing Tools for 2023

Tool #1: Selenium

Selenium is an open-source automation testing tool that allows testers to automate web applications across various browsers and platforms. Test scripts can be written in a range of programming languages, including Java, Python, Ruby, and C#. Selenium is highly flexible and customizable, and can be easily integrated with other tools such as Maven, Jenkins, and TestNG.
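As a rough illustration, here is a minimal Selenium WebDriver script in Java. It assumes a local ChromeDriver installation is on the PATH; the URL and the checked element are placeholders.

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class SeleniumSmokeTest {
        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();   // any supported browser driver works
            try {
                driver.get("https://example.com");                        // placeholder URL
                String heading = driver.findElement(By.tagName("h1")).getText();
                System.out.println("Page heading: " + heading);
            } finally {
                driver.quit();                        // always release the browser session
            }
        }
    }

In a real suite this would typically run as a TestNG or JUnit test invoked from Maven or Jenkins, as mentioned above.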

Pros and cons of the tool:

- Pros:
  - Open-source and free to use
  - Cross-browser and cross-platform testing
  - Support for multiple programming languages
  - Wide range of integrations and plugins available
- Cons:
  - Requires technical expertise to set up and use effectively
  - Limited support for testing non-web applications


Tool #2: Appium

Appium is an open-source automation testing tool that is used for testing mobile applications across different platforms such as iOS and Android. It supports a range of programming languages such as Java, Python, Ruby, C#, and more, and allows testers to write and run test scripts on real devices or emulators.
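As a sketch of what an Appium test looks like, here is a minimal Java example using the Appium Java client. The device name, APK path, and server URL are placeholder assumptions, and the exact capabilities depend on your Appium version.

    import java.net.URL;
    import io.appium.java_client.android.AndroidDriver;
    import org.openqa.selenium.remote.DesiredCapabilities;

    public class AppiumSmokeTest {
        public static void main(String[] args) throws Exception {
            DesiredCapabilities caps = new DesiredCapabilities();
            caps.setCapability("platformName", "Android");
            caps.setCapability("deviceName", "emulator-5554");   // hypothetical emulator id
            caps.setCapability("app", "/path/to/app.apk");       // hypothetical APK path
            // Assumes an Appium server is already running locally on port 4723.
            AndroidDriver driver = new AndroidDriver(new URL("http://127.0.0.1:4723/wd/hub"), caps);
            try {
                System.out.println("Session started on " + caps.getCapability("deviceName"));
            } finally {
                driver.quit();
            }
        }
    }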

Pros and cons of the tool:

- Pros:
  - Open-source and free to use
  - Cross-platform mobile testing
  - Supports a wide range of programming languages
  - Can test on real devices or emulators
- Cons:
  - Requires setup of a testing environment
  - Requires technical expertise to use effectively


Tool #3: TestComplete

TestComplete is an automated testing tool that supports web, desktop, and mobile applications across multiple platforms. It supports a wide range of programming languages such as JavaScript, Python, VBScript, and more. TestComplete offers a comprehensive set of features, including automated functional testing, performance testing, and test management.

Pros and cons of the tool:

- Pros:
  - Supports a wide range of application types
  - User-friendly and easy to use
  - Offers comprehensive features for functional and performance testing
  - Provides built-in reporting and analysis tools
- Cons:
  - Limited support for testing non-Windows applications
  - Mobile testing support is more limited than its web and desktop coverage


Tool #4: Katalon Studio

Katalon Studio is an all-in-one automation testing solution that supports web, API, mobile, and desktop applications across multiple platforms. It supports a range of programming languages such as Java, Groovy, and more. Katalon Studio offers a user-friendly interface that allows testers to easily create and execute automated tests.

Pros and cons of the tool:

- Pros:
  - Offers a wide range of automation testing capabilities
  - User-friendly and easy to use
  - Provides built-in reporting and analysis tools
  - Supports data-driven and keyword-driven testing
- Cons:
  - Limited support for testing non-HTTP/HTTPS applications
  - Limited customization options for advanced users


Tool #5: Ranorex Studio

Ranorex Studio is an all-in-one automated testing tool that supports web, mobile, and desktop applications across multiple platforms. It supports a range of programming languages such as C#, VB.NET, and more. Ranorex Studio offers a comprehensive set of features, including UI testing, functional testing, and regression testing.

Pros and cons of the tool:

- Pros:
  - Offers a comprehensive set of features for automation testing
  - User-friendly and easy to use
  - Provides built-in reporting and analysis tools
  - Offers robust object recognition and object repository management
- Cons:
  - Limited support for testing non-Windows applications
  - Can be expensive for small businesses or individual users

How to Design an Effective Performance Testing Strategy for Web Applications?

 Introduction

Web applications are critical to businesses of all sizes, but their performance can have a significant impact on user experience, customer satisfaction, and the bottom line. Performance testing is essential to ensure that web applications can handle the expected traffic and load without slowing down or crashing. In this blog post, we will discuss how to design an effective performance testing strategy for web applications.

1. Define the Objectives of Your Performance Testing Strategy

The first step in designing an effective performance testing strategy is to define your objectives. You need to determine what you want to achieve through performance testing. Common objectives include ensuring that the application can handle a specific number of concurrent users, minimizing page load times, or verifying that response times meet user expectations. It's important to set specific and measurable objectives that align with your business goals.


2. Identify the Performance Metrics You Will Measure

The second step in designing an effective performance testing strategy is to identify the performance metrics you will measure. Performance metrics provide insight into the behavior of your web application under different load conditions. Common performance metrics include response time, throughput, error rate, and resource utilization. It's important to select performance metrics that align with your objectives and are meaningful to your stakeholders.
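To make these metrics concrete, the sketch below computes average and 90th-percentile response time, throughput, and error rate from a set of hypothetical samples; the numbers are made up purely for illustration.

    import java.util.Arrays;
    import java.util.List;

    public class MetricsExample {
        public static void main(String[] args) {
            // Hypothetical response times (ms) collected during a 10-second window.
            List<Long> samples = Arrays.asList(120L, 180L, 95L, 240L, 2000L, 150L, 130L);
            long errors = 1;                 // e.g. one request returned HTTP 500
            double windowSeconds = 10.0;

            double avg = samples.stream().mapToLong(Long::longValue).average().orElse(0);
            long[] sorted = samples.stream().mapToLong(Long::longValue).sorted().toArray();
            long p90 = sorted[(int) Math.ceil(0.9 * sorted.length) - 1];   // nearest-rank percentile
            double throughput = samples.size() / windowSeconds;            // requests per second
            double errorRate = 100.0 * errors / samples.size();            // percent

            System.out.printf("avg=%.1f ms, p90=%d ms, throughput=%.1f req/s, errors=%.1f%%%n",
                    avg, p90, throughput, errorRate);
        }
    }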


3. Determine Your Testing Environment

The third step in designing an effective performance testing strategy is to determine your testing environment. Your testing environment should closely resemble your production environment in terms of hardware, software, and network configuration. This will help ensure that your performance test results are accurate and representative of real-world conditions. If you cannot replicate your production environment, you should aim to create a testing environment that is as close as possible.


4. Select the Right Tools

The fourth step in designing an effective performance testing strategy is to select the right performance testing tools. There are many performance testing tools available, ranging from open-source tools to commercial products. When selecting performance testing tools, consider factors such as ease of use, scalability, and support for your testing environment. It's important to choose tools that align with your objectives and provide accurate and actionable results.


5. Design Your Test Scenarios

The fifth step in designing an effective performance testing strategy is to design your test scenarios. A test scenario is a series of steps that simulate user behavior on your web application. When designing test scenarios, consider factors such as the number of concurrent users, the actions to simulate, and the duration of the test. It's important to design test scenarios that align with your objectives and performance metrics.
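As a simplified sketch of what a scenario boils down to, the Java snippet below simulates a number of concurrent users hitting one placeholder URL and records per-request timings. A real scenario would normally be built in a dedicated tool (JMeter, Gatling, etc.) with think times, ramp-up, and multiple steps.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class SimpleLoadScenario {
        public static void main(String[] args) throws Exception {
            int concurrentUsers = 25;                        // scenario parameter
            String targetUrl = "https://example.com/login";  // placeholder URL
            HttpClient client = HttpClient.newHttpClient();
            ExecutorService pool = Executors.newFixedThreadPool(concurrentUsers);

            for (int user = 0; user < concurrentUsers; user++) {
                pool.submit(() -> {
                    try {
                        long start = System.nanoTime();
                        HttpRequest request = HttpRequest.newBuilder(URI.create(targetUrl)).GET().build();
                        HttpResponse<String> response =
                                client.send(request, HttpResponse.BodyHandlers.ofString());
                        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                        System.out.println("status=" + response.statusCode() + " time=" + elapsedMs + " ms");
                    } catch (Exception e) {
                        System.out.println("error: " + e.getMessage());   // counts toward the error rate
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
        }
    }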


6. Execute Your Tests

The sixth step in designing an effective performance testing strategy is to execute your tests. When executing tests, you should simulate various load conditions to determine how your web application performs under different scenarios. You should also monitor performance metrics to identify any issues that may arise. It's important to execute tests in a controlled and repeatable manner to ensure accurate and consistent results.


7. Analyze the Results and Optimize Performance

The seventh step in designing an effective performance testing strategy is to analyze the results and optimize performance. Once you have executed your tests and collected data, you should analyze the results to identify any bottlenecks or performance issues. You should then optimize your web application to address any issues that arise. This may involve adjusting server settings, optimizing code, or scaling infrastructure.


Conclusion

In conclusion, designing an effective performance testing strategy is critical to ensuring the performance and scalability of your web application. By following these steps, you can design a performance testing strategy that aligns with your objectives, measures the right performance metrics, and provides actionable results. With an effective performance testing strategy, you can ensure that your web application delivers an optimal user experience and meets your business goals.



ChatGPT-powered Continuous Testing: Improving Software Quality and Speed

 Software development is a complex process that requires a variety of testing methods to ensure that the final product is of high quality and meets the needs of the users. Continuous testing is a practice that involves testing software during the development process, rather than waiting until the end. This approach helps to identify and fix defects early on, which can save time and money in the long run. However, traditional methods of continuous testing can be time-consuming and resource-intensive. This is where ChatGPT, the powerful language model developed by OpenAI, comes in.


ChatGPT is a natural language processing (NLP) model that can be used to generate test cases, automate test execution, and provide real-time feedback. This can help to improve the efficiency and effectiveness of continuous testing, which can lead to better software quality and faster delivery times.


One of the key benefits of using ChatGPT for continuous testing is its ability to generate test cases. The model can be trained on a set of software requirements and can then be used to identify potential edge cases and test scenarios. This can help to ensure that all potential scenarios are covered, which can reduce the risk of defects in the final product. ChatGPT can also be used to generate test cases that are tailored to the specific features and functionality of the software, which can help to improve test coverage.
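As a hedged sketch of how this might be wired up, the Java snippet below sends a requirement to OpenAI's chat completions endpoint and prints the raw response, from which suggested test cases can be parsed. The model name, prompt wording, and response handling are assumptions and will vary with the API version you use.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class TestCaseGenerator {
        public static void main(String[] args) throws Exception {
            String apiKey = System.getenv("OPENAI_API_KEY");   // assumes a key is configured
            String requirement = "The login form must lock the account after 5 failed attempts.";
            String body = "{\"model\": \"gpt-3.5-turbo\", \"messages\": [{\"role\": \"user\", "
                    + "\"content\": \"Generate boundary and edge-case test cases for: " + requirement + "\"}]}";

            HttpRequest request = HttpRequest
                    .newBuilder(URI.create("https://api.openai.com/v1/chat/completions"))
                    .header("Authorization", "Bearer " + apiKey)
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body());   // raw JSON containing the generated test cases
        }
    }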


Another benefit of using ChatGPT for continuous testing is its ability to automate test execution. The model can be used to generate automated test scripts, which can be executed by a testing tool. 

This can help to reduce the time and resources required for manual testing and can increase the speed of software development. ChatGPT can also be used to report on test results, which can help to identify areas for improvement in the software development process.


In addition to the above, ChatGPT can also be used to analyze test data and identify trends that can improve the software development process. The model can be trained on historical test data and can be used to identify patterns that can lead to defects. This can help to improve the quality of the software by identifying areas that are at risk of defects and addressing them early on.


In conclusion, ChatGPT is a powerful language model that can be used to improve the efficiency and effectiveness of continuous testing. Its ability to generate test cases, automate test execution, and provide real-time feedback can help to improve software quality and speed up delivery times. ChatGPT can also be used to analyze test data and identify trends that can improve the software development process. This makes ChatGPT-powered Continuous testing a promising solution for software development teams to improve the quality of the product and accelerate the delivery.

"KLOC and Test Case Prioritisation in Agile Development"

Introduction: Test case prioritization is a critical aspect of software testing, especially in agile development environments where speed and flexibility are key. Agile development methodologies such as Scrum and Kanban place a strong emphasis on delivering value to customers quickly and continuously, and this requires a focus on testing that is both efficient and effective. One key metric that can be used to inform test case prioritization is KLOC, thousands of lines of code. In this blog post, we will explore the role of KLOC in test case prioritization in agile development, and how it can be used to optimize testing efforts and improve the overall quality of software. By focusing testing efforts on areas of the codebase with higher KLOC, teams can prioritize test cases that are most likely to uncover critical issues and improve the overall quality of the software.

Understanding KLOC:

KLOC, or thousands of lines of code, is a measure of the size of a software system. It is typically calculated by counting the number of lines of code in the system, and is often used as a way to estimate the complexity and effort required to test it. In this section, we will take a closer look at KLOC and how it can be used to inform test case prioritization in agile development. To calculate KLOC, a simple line-counting tool or script can be used: it counts the number of lines of code in the identified codebase and then divides the total number of lines by 1000 to get the KLOC value. This gives you the number of thousands of lines of code in the codebase; a sketch of such a script is shown below.
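A minimal sketch of such a line-counting script in Java is shown here; it assumes the codebase consists of .java files readable in the default charset, and counts raw lines without excluding comments or blanks.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;

    public class KlocCounter {
        public static void main(String[] args) throws IOException {
            Path root = Path.of(args.length > 0 ? args[0] : ".");   // root of the codebase
            List<Path> sources;
            try (Stream<Path> files = Files.walk(root)) {
                // Count only source files; adjust the extension filter for your language.
                sources = files.filter(p -> p.toString().endsWith(".java")).collect(Collectors.toList());
            }
            long totalLines = 0;
            for (Path file : sources) {
                totalLines += Files.readAllLines(file).size();
            }
            System.out.printf("Total lines: %d  KLOC: %.1f%n", totalLines, totalLines / 1000.0);
        }
    }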


One of the main advantages of using KLOC as a metric for test case prioritization is that it is relatively simple to calculate and understand. Unlike other metrics such as cyclomatic complexity, which can be more difficult to interpret and apply, KLOC is a straightforward measure of the size of the codebase. This makes it easy to communicate to stakeholders and team members, and to use as a basis for test case prioritization. Another advantage of using KLOC as a metric for test case prioritization is that it can be used to balance coverage and efficiency. By focusing testing efforts on areas of the codebase with high KLOC, teams can ensure that they are targeting the areas that are most likely to contain defects, while still being able to cover a significant portion of the codebase. This can help teams to optimize testing efforts and improve the overall quality of the software.


It is important to note that KLOC is a simple metric and has some limitations: it doesn't consider the complexity of the code or its likely defects, and it doesn't include non-code files like documentation or configuration files. It should therefore be used as a starting point for test case prioritization, not as the only input.


Using KLOC for Test Case Prioritization

Once you have calculated the KLOC value for your codebase, it can be used to inform test case prioritization. By identifying areas of the codebase with higher KLOC, you can target testing efforts in the areas that are most likely to contain defects. One way to use KLOC for test case prioritization is to create a list of test cases for each module or feature in the codebase, and then prioritize the test cases based on the KLOC value of the corresponding code. For example, if you have two modules, A and B, with KLOC values of 500 and 1000, respectively, you would prioritize testing for module B first as it has a higher KLOC value and therefore is more likely to contain defects.
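Continuing the module A/B example, a small sketch of this ordering step might look like the following; the module names and KLOC values are the hypothetical ones from the paragraph above.

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class KlocPrioritizer {
        public static void main(String[] args) {
            // Hypothetical KLOC values per module.
            Map<String, Integer> klocByModule = Map.of("Module A", 500, "Module B", 1000);

            // Order modules by descending KLOC: larger modules are tested first.
            Map<String, Integer> testingOrder = new LinkedHashMap<>();
            klocByModule.entrySet().stream()
                    .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
                    .forEach(e -> testingOrder.put(e.getKey(), e.getValue()));

            testingOrder.forEach((module, kloc) ->
                    System.out.println("Test " + module + " next (" + kloc + " KLOC)"));
        }
    }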


Another way to use KLOC for test case prioritization is to create a heatmap of the codebase, where each module or feature is represented by a color or symbol that corresponds to its KLOC value. This can help to quickly identify areas of the codebase that have higher KLOC and therefore require more testing.


It is important to note that while KLOC is a valuable metric for test case prioritization, it should be used in conjunction with other factors such as code complexity, change frequency, and testing history. For example, a module with a high KLOC value but that hasn't been modified in a long time and has been extensively tested in the past, may not require as much testing as a module with a lower KLOC value but has been recently modified and has not been tested as much.


Additionally, it's important to use the same tool and method to measure KLOC consistently over time; this will allow you to track changes in the codebase and adjust test case prioritization accordingly. It's also essential to communicate the KLOC results and the test case prioritization plan to the rest of the team, and to keep them updated on the progress.

In conclusion, using KLOC as a metric for test case prioritization in agile development can help teams to focus their testing efforts on the most valuable features and functionality, while still being able to deliver working software to customers quickly.


Integrating KLOC into Agile Development Processes

Once you have a good understanding of how KLOC can be used for test case prioritization, the next step is to integrate it into your agile development processes. This can help to ensure that testing efforts are aligned with your agile development goals and objectives, and that you are able to deliver working software to customers on a regular basis. One way to integrate KLOC into agile development processes is to incorporate it into your sprint planning and retrospective meetings. During sprint planning, you can use KLOC to identify areas of the codebase that require more testing and to prioritize test cases accordingly. During retrospective meetings, you can use KLOC to track progress and to identify areas where testing efforts can be improved.


Another way to integrate KLOC into agile development processes is to use it as a key performance indicator (KPI). By tracking KLOC over time, you can get a better understanding of how your codebase is evolving and how it is impacting your testing efforts. This can help you to identify areas of the codebase that are becoming more complex and that require more testing, and to make adjustments accordingly. It's also important to use KLOC in conjunction with other metrics such as code complexity and change frequency. By combining these metrics, you can get a more complete picture of the codebase and can make more informed decisions about testing.


Additionally, it's important to consider the testing environment, and make sure that the team has the necessary resources and tools to measure and track KLOC. This will help to ensure that the team can work effectively and efficiently, and that testing efforts are aligned with the agile development process.


Conclusion:

In conclusion, KLOC, or thousands of lines of code, is a valuable metric for test case prioritization in agile development. By measuring the size of the codebase, KLOC can help teams to identify the areas that are most likely to contain defects, and to focus their testing efforts accordingly. Additionally, KLOC is a simple metric that is easy to understand and communicate, making it a useful tool for test case prioritization in agile development.


However, it's important to note that KLOC is a simple metric and has some limitations. It doesn't consider the complexity of the code or the potential defects, and it doesn't include non-code files like documentation or configuration files. Therefore, it should be used in conjunction with other factors such as code complexity, change frequency, and testing history. To integrate KLOC into agile development processes, it is important to incorporate it into sprint planning and retrospective meetings, and use it as a key performance indicator (KPI). By tracking KLOC over time, teams can get a better understanding of how the codebase is evolving and how it is impacting their testing efforts.


Finally, it's important to use the same tool and method to measure KLOC consistently over time; this will allow teams to track changes in the codebase and adjust test case prioritization accordingly. It's also essential to communicate the KLOC results and the test case prioritization plan to the rest of the team, and to keep them updated on the progress. By following these best practices, teams can use KLOC as a powerful tool for test case prioritization in agile development and deliver better quality software to customers.

How to Identify and Prevent Common Performance Issues: A Guide for Non-Technical Users

Introduction

Website performance is crucial for both user experience and search engine optimization. Slow loading times and frequent crashes can lead to high bounce rates and poor search engine rankings. As a website owner, it's important to be aware of common performance issues and how to prevent them.

In this guide, we will focus on three common performance issues: memory leaks, thread deadlocks, and database contention. These issues can be difficult to diagnose and fix, but with the right approach, you can prevent them from happening in the first place. We will provide a brief explanation of each issue and then offer tips for identifying and preventing them. By the end of this guide, you will have a better understanding of how to optimize your website's performance and keep it running smoothly.

Please note that this guide is for non-technical users, so we will be using simple language and avoiding technical jargon as much as possible.


Memory Leaks

A memory leak occurs when a program allocates memory for a specific task and then fails to release it after the task is completed. Over time, these small leaks can add up and cause the program to use more and more memory, eventually leading to poor performance or crashes. One of the main symptoms of a memory leak is that the program's memory usage gradually increases over time, even when it's not doing anything. You may also notice that the program becomes slower and less responsive as memory usage increases.

To identify memory leaks, you can use a tool called a memory profiler. These tools can help you track the program's memory usage over time and identify which parts of the code are causing the leaks. Once you've identified the source of a leak, you can start working on preventing it. Here are a few best practices you can follow to avoid memory leaks (a short sketch of a typical leak follows the list):

  • Be mindful of creating new objects and discarding them when they are no longer needed.
  • Use smart pointers and RAII (Resource Acquisition Is Initialization) techniques to manage memory automatically.
  • Avoid creating cyclic references, which can prevent garbage collection.
  • Use a language that has automatic memory management, such as Java or C#. These languages have built-in mechanisms for managing memory and can help prevent leaks.
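For illustration, here is a deliberately leaky Java sketch of the pattern described above: a static collection that only ever grows, so the garbage collector can never reclaim its entries.

    import java.util.ArrayList;
    import java.util.List;

    public class LeakExample {
        // Classic leak: entries are added to a static cache but never removed.
        private static final List<byte[]> CACHE = new ArrayList<>();

        static void handleRequest() {
            CACHE.add(new byte[1024 * 1024]);   // 1 MB retained per request, forever
        }

        public static void main(String[] args) {
            for (int i = 0; i < 10_000; i++) {
                handleRequest();                // memory usage climbs steadily over time
            }
        }
    }

A memory profiler pointed at this program would show the heap growing with every call, which is exactly the symptom described earlier.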

Thread Deadlocks

A thread deadlock occurs when two or more threads are blocked and unable to continue execution because each one is waiting for the other to release a resource. This can lead to poor performance or even cause the program to freeze or crash. Thread deadlocks can be difficult to identify, but you may notice that the program becomes unresponsive or that certain tasks take an unusually long time to complete.

To identify thread deadlocks, you can use a tool called a thread profiler. These tools can help you track the program's thread usage over time and identify which parts of the code are causing the deadlocks. Once you've identified the source of a deadlock, you can start working on preventing it. Here are a few best practices you can follow to avoid thread deadlocks (a small example of the problem follows the list):

  • Avoid using nested locks, as they can lead to deadlocks.
  • Use a timeout mechanism for resources that are in high demand.
  • Use the 'try-lock' pattern for resources that may be held for a short period of time.
  • Use a synchronization mechanism, such as semaphores, to control access to shared resources.
  • Use a lock-free data structure to ensure that threads never wait for resources.
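The toy Java program below shows the classic deadlock the list warns about: two threads take the same two locks in opposite order, so each ends up waiting for the other forever.

    public class DeadlockExample {
        private static final Object LOCK_A = new Object();
        private static final Object LOCK_B = new Object();

        public static void main(String[] args) {
            Thread t1 = new Thread(() -> {
                synchronized (LOCK_A) {
                    pause();
                    synchronized (LOCK_B) { System.out.println("t1 finished"); }
                }
            });
            Thread t2 = new Thread(() -> {
                synchronized (LOCK_B) {         // opposite lock order: the root cause
                    pause();
                    synchronized (LOCK_A) { System.out.println("t2 finished"); }
                }
            });
            t1.start();
            t2.start();
            // Fix: acquire locks in a single agreed order, or use tryLock() with a timeout.
        }

        private static void pause() {
            try { Thread.sleep(100); } catch (InterruptedException ignored) { }
        }
    }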

Database Contention

Database contention occurs when multiple processes are trying to access the same resource at the same time, such as a table or a row in a database. This can lead to poor performance, as the database has to work harder to manage the multiple requests. Symptoms of database contention include slow query performance, high CPU usage, and long wait times for database locks.

To identify database contention, you can use a tool called a database profiler. These tools can help you track the database's usage over time and identify which parts of the code are causing the contention. Once you've identified the source of the contention, you can start working on preventing it. Here are a few best practices you can follow to avoid database contention (a small example follows the list):

  • Use indexes to optimize queries and reduce contention.
  • Use partitioning to spread the load across multiple servers.
  • Avoid using table scans, which can cause contention.
  • Use stored procedures to group related queries and reduce contention.
  • Use a connection pool to limit the number of open connections to the database.
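As a small illustration of the first tip, the sketch below creates an index and queries through a prepared statement using plain JDBC. It assumes an in-memory H2 database on the classpath purely for demonstration; any JDBC database works the same way.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ContentionExample {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:shop", "sa", "")) {
                try (Statement st = conn.createStatement()) {
                    st.execute("CREATE TABLE orders(id INT PRIMARY KEY, customer_id INT, total DECIMAL)");
                    // The index lets the database find matching rows without scanning
                    // (and locking) the whole table, which reduces contention.
                    st.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)");
                }
                try (PreparedStatement ps =
                             conn.prepareStatement("SELECT id, total FROM orders WHERE customer_id = ?")) {
                    ps.setInt(1, 42);
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            System.out.println(rs.getInt("id") + " -> " + rs.getBigDecimal("total"));
                        }
                    }
                }
            }
        }
    }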

Conclusion

In this guide, we've discussed three common performance issues: memory leaks, thread deadlocks, and database contention. We've provided an overview of what each issue is, how it can affect performance, and tips for identifying and preventing them. By following the best practices outlined in this guide, you can help ensure that your website runs smoothly and avoids performance issues. Remember that preventing these performance issues requires careful programming and testing, but with the right approach, you can keep your website running at optimal performance.

It's worth mentioning that identifying and preventing these performance issues can be a complex task, especially for non-technical users. If you're having trouble identifying or preventing performance issues on your own, it may be beneficial to hire a developer or consultant with experience in performance optimization to help you. For further learning, you can refer to additional resources such as books, blogs, and online tutorials. These resources can provide more detailed information on the topics covered in this guide, and can help you take your performance optimization to the next level.


Thank you for reading this guide. I hope you found it helpful and informative.

Jenkins and Jmeter Integration (PART 1)

Several posts on this topic are already available; I am not writing this one with reference to any of them. Jenkins + JMeter integration is quite simple. I drive the execution using "Ant", following the steps below:
  1. Environmental set up
  2. Configuration set up
  3. Invoking Jmeter from Jenkins which also includes publishing the results at Jenkins
We will discuss each phase with an initial, minimal configuration for ease of understanding how to achieve this setup. First we will look at the environment setup; in Part 2 we will see the next two steps.

Environment setup: The three things most important for this integration are "Ant", "JMeter", and "Jenkins". All of the configuration below applies to the Windows OS.

ANT: Download Ant from Apache and add the path up to the "bin" folder, i.e.
C:\apache-ant-1.8.4-bin\apache-ant-1.8.4\bin;


JMeter: Download JMeter and add the path up to the "bin" folder, i.e.
D:\apache-jmeter-2.7\apache-jmeter-2.7\bin;


Jenkins: Download Jenkins from the Jenkins site and deploy it. You can either deploy it in a servlet container such as Tomcat or run it from the command line:
java -jar jenkins.war

This deploys Jenkins automatically, since Jenkins uses an embedded servlet container called Winstone. Note: if Jenkins is deployed with the above steps, then it always has to be started using the above command, or after the first installation you can set it up as a Windows service.

NOTE: All of the above still depends on Java, so make sure Java is installed and set up correctly.








    Jmeter - OS Sampler :

    JMeter 2.7 has been released with some great features and enhancements/improvements in the JMS and web services samplers, along with a new sampler, the "OS Sampler". For reference, please go through the link below.


    Rather than talking about all the enhancements, let us look at the new "OS Sampler".
    The OS Sampler runs command-line commands such as "dig", which is a built-in tool packaged with most Linux distributions; some of the packages are listed below.
    1. Bind Tools (Gentoo)
        
    2. Bind-utils (Red Hat and Fedora)
        
    3. Dnsutils (debian)

    dig is usually used to get information about DNS servers, mail servers, IP addresses, etc.

    Thanks
    Amar

    Look Yourself in the Mirror

    Hello Readers,
    This is my last post of the year. I wish you happy Christmas and New Year celebrations.

    When I looked at myself in the mirror I thought, "Oh man, one more year of my life is over, and what have you done?" This question started running in my mind at the beginning of December and still continues... Then I started collecting the good and bad things that I have done this year, and also what difference I made this year compared to past years.
    Ans: I wasted one more year as usual. But it always teaches me a lesson.

    Good things (of course, these are only for me :)
    1. When I received a mail from a person I didn't know (Martin Havlat) saying that "my rights to the forum have been updated to Moderator". This mail really boosted me to do something out of the box.
    2. Lots of appreciation from my manager, leads, colleagues, etc., and of course from my side as well.
    3. Implementing TestLink in my company (this was the toughest job I have done in my career so far).
    4. The rest of the things are normal; these are the only special things I can share.


    Bad things,
    Obviously I don't want to discuss them, but still:
    1. The implementation of TestLink in my company was delayed for 'N' number of reasons which I cannot discuss, and I also feel bad about it.
    2. Most of the time I felt I should have done many things in a better way.

    You may wonder why I made this post. There is a reason.

    "The mirror is like two sides of a coin,
    You never know the good and the bad until you realize it by seeing it"

    I realized this when I saw myself in the mirror and thought of the question, "Oh man, one more year of my life is over, and what have you done?" If this had not come to my mind, I would never have known what happened in past years or this year.

    I am happy with what I did, but I also did a few great things that I had not done before; this is what made the difference between past years and the current year (2009).

    I am moving ahead more passionate about what I do for the coming year.

    Please, if you are passionate:

    "LOOK YOURSELF IN THE MIRROR and ASK YOURSELF WHAT DIFFERENCE YOU HAVE MADE"

    Thanks,
    TesterWorld.

    Working with TestLink

    I was excited working on it and really felt good, but there are also disadvantages/factors that have to be considered, and in fact I learnt a few lessons while implementing it (it is not yet implemented; the decision is left to management). When I took up this task, we knew it would be hard to implement, but in the long run it is good. The problem is that for a project with 40,000+ test cases, it will be really hard to dump them all into TestLink. So if TestLink is adopted for new projects, I think it can fit well.

    From my observations, these were the hurdles/disadvantages of implementing the TestLink tool as far as our project is concerned.

    1. Migrating our test cases completely to TestLink takes a lot of time; though import functionality is provided, it can only be done via XML (including custom fields).
    2. Only Pass, Fail, or Blocked statuses are available for test case execution and while generating results.

    Advantages :
    1. Easy tracking of test cases (search by keyword, test case id, version, etc.)
    2. We can add our own custom fields to test cases.
    3. Allocating work, whether test case creation, execution, or preparing any kind of document, is easy.
    4. When a test case is updated, the previous version can also be tracked.
    5. We can generate results build-wise.
    6. Test plans are created for builds, and work allocations can be done.
    7. Reporting is one of the awesome features of TestLink: it generates reports in the desired format (HTML/CSV/Excel), and the reports are based on the results of test case executions allocated build-wise, per resource, or per test plan, with pass/fail percentages; they can also be generated as graphs.
    8. And all of the above is privilege-based, which is a strength of TestLink; I liked this feature very much.

    This post is not about whether to use TestLink or whether to implement the tool. I wrote it because we can learn a few lessons before implementing any tool, and I learnt the following:
    1. Regardless of the features of the tool, we need to look at how feasible it is for the project.
    2. The environment and process have to be considered as well.
    3. Time is the main factor in implementing such tools in such huge projects, etc.

    Of course, I had heard all of the above lessons before, that while implementing any kind of tool we need to consider these points, but today I experienced those lessons myself and enjoyed doing such work.

    Thanks,
    TesterWorld
    Update: At least we have started implementing it for one of the projects; yet, there are many challenges to come.

    Testing an Application without documents

    I have taken Stefan's dashboard just to show how exploratory testing can be done, as I recently met a person (a tester) and our conversation went as follows:

    Tester: Do you know anything about WinRunner and QTP?
    TesterWorld (me): They are automation tools used for checking functionality.
    Tester: Anyone in the testing field can give that answer. Tell me, are you a manual or an automation tester?

    Our conversation went over the hill, as he argued that automation will replace manual testing. I got annoyed with his baseless argument and just asked a simple question: can you test an application that has no functionality/specification document? Tester: No, I can't test it. How can we test without any documents?

    This was the answer he gave me. I really pity most of the people who are not skilled testers and still act as if they are senior QA, etc. So this post is for those people who are really interested in and passionate about testing.

    I have taken Stefan's dashboard as it really looks great and made me excited while testing for this post (http://abouttesting.blogspot.com/2008/06/testing-dashboard.html). I got really excited on seeing the sample screenshot in the above-mentioned post, and I downloaded the file without reading the whole post.

    First thought regarding the dashboard:
    When I unzipped it, I found an HTML file, an Excel sheet, two images, and one sample screenshot. I opened the HTML file and found that no values were getting populated in the dashboard. I was shocked as to why it was not working; I thought some of the files must have been deleted intentionally.

    Second thought regarding the dashboard:
    Some time later, I went into the folder again and opened the HTML file in Notepad. Unfortunately I am poor at programming languages, but I guessed from the path given in the HTML file that the Excel sheet had to be placed in the C drive. I immediately did that and opened the HTML file again. Hurray, it opened. Of course, the path can also be changed to any desired location by editing it in Notepad, in order to make the dashboard work.

    Third thought regarding the dashboard:
    So I tried making a couple of changes, just flipping a status from pass to fail, changing the version, and adding comments. Everything seemed to be cool and working fine; then I tried some different scenarios.
    Scenario 1
    Steps to reproduce:
    1. Open the "autteststatus.html" file, and click on the change status button
    2. In the comments, enter 'test' and try to save; a JavaScript error appears.

    Scenario 2
    Steps to reproduce:
    1. Open the "autteststatus.html" file, and click on the change status button
    2. Now enter a word in double quotes in the comments column and save; you can observe that the comments column is saved. Now click the change button again, and you can observe that there is no data present for the changes you made previously.

    Scenario 3
    Steps to reproduce:
    1. Open the "autteststatus.html" file, and click on the change status button
    2. Enter the comments column with HTML tags like test and save; you can find that it is saved and displayed as bold. Now click the change button again, give the name in HTML tags and save; the changes are saved successfully but the text you changed is not visible. Click the change button again and you can find that the text is present as given, but it is not visible after you save.

    Scenario 4:
    Steps to reproduce:
    1. Open the "autteststatus.xls" file, and click on the change status button
    2. Give the comments as 'test'; note the customer and version number. Now open the HTML file and observe that the starting quote is missing in the column.
    These were some of the scenarios. Please be passionate about the work you do; that will make you rich and wealthy.

    "Automation or Manual testing"

    While this seems quite an odd thing to ask, what do we want to work on before coming into a testing career? In Hyderabad, the teaching culture of software testing has changed, or seems to have changed, a little. Previously it was a struggle among our friends (upcoming testers) who thought automation means QTP, WinRunner, etc., where I used to object that these tools are not all it means, just to have a regular discussion. When I came into this testing career I asked a lot of people (upcoming testers) about these two terms, "manual testing or automation testing"; every time we start we have several arguments, and in the end we have nothing but smiles on each other's faces, which shows that we are still upcoming testers.
    To my mind, automation is just an added skill for the tester, not the whole world. Without manual testing there is no automation, and of course in some cases the reverse is also true. Automation itself is not just tools like "QTP, WinRunner, SilkRunner", etc.; the point is that whatever we use on a regular basis for testing purposes that reduces time is automation. That is the basis of automation testing.
    I would like to give a few examples. Let's take a field something like this:

    FIELD ONE




    Conditions: Only 2000 characters are allowed.

    In such test cases, what would someone do? I saw a few people who started typing 2000 characters into Notepad, which was a waste of time. To save time we have lots of tools, like test data generators: search for one and generate 2000 characters, which will noticeably save time and increase productivity. Such tools help us a lot in increasing productivity. Another small example is Word or email, where we have a spell checker, an automated tool with which we correct wrong spellings. A tiny sketch of generating such test data is shown below.
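    As a tiny sketch, even a few lines of Java can generate the 2000-character value instead of typing it by hand; the character pattern used here is arbitrary.

        public class TestDataGenerator {
            public static void main(String[] args) {
                int length = 2000;   // the field's limit from the test case
                StringBuilder sb = new StringBuilder(length);
                for (int i = 0; i < length; i++) {
                    sb.append((char) ('A' + (i % 26)));   // repeat A..Z until the limit
                }
                System.out.println(sb);                    // paste this into the field
                System.out.println("Generated " + sb.length() + " characters");
                // Also try length - 1 and length + 1 to exercise the boundary.
            }
        }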
    Automation is not just tools like QTP or WinRunner. Typically, one has to understand that whatever is needed to automate work and increase our productivity is an automation tool.


    NOTE: DEAR UPCOMING TESTERS, this post is especially meant to increase awareness about automation versus manual testing. I would be really happy if at least one of you understood this post and applied it on a daily basis to increase your productivity, and don't forget to leave your comments before you go away from this blog.


    SOME AWESOME SITES:
    You will never leave once you visit these, as at some point or another you always need to increase your productivity.

    www.testingmentor.com
    www.testersdesk.com

    Working with STAF (part 2)

    Well, we discussed some of the concepts of STAF in the post WORKING WITH STAF. Now we will look at something interesting: interaction with JMeter.

    Let us first go over the concepts of JMeter in order to go further.

    JMeter is an open-source tool we can download from the Apache JMeter site.
    JMeter is a performance tool used for finding parameters such as average response time, deviation, etc., like other tools such as LoadRunner. The advantage of JMeter is that it is open source and it also uses less memory when compared to the other tools.

    Now create a test plan according to the manual given on the Apache Jakarta site itself. After creating a test plan, save the file with the desired name. Let's go to STAF now.

    Suppose this file is on a remote system; we can execute it there and the results can be obtained on our system. In order to do this we have to give trust levels to the remote systems. There are 5 trust levels, and the access granted to a remote system differs according to its trust level. After giving the trust level, we can execute the test plan by making a batch file.

    In this batch file we need the controller (the system we run from) and the load generators (where the test plan is present). I will keep the batch file very simple, to give an understanding and an idea of how to execute the test plan using it.


    REM set the controller, the load generator, and the test plan
    REM (no spaces around '=' in SET, or the variable name/value will include them)

    SET controller=webserver1
    SET load=webserver2
    SET plan=myplan.jmx

    REM check that STAF can connect to the remote systems

    FOR %%H IN (%controller% %load%) DO STAF %%H PING PING

    REM execute the test plan using JMeter (non-GUI mode) on the load generator

    FOR %%H IN (%load%) DO STAF %%H PROCESS START SHELL COMMAND C:\jmeter\bin\jmeter-n.cmd C:\jmeter\bin\%plan%

    REM copy the result logs from the remote system back to the controller

    FOR %%H IN (%load%) DO STAF %%H fs copy directory c:\jmeter\bin todirectory c:\logs\ tomachine %controller% EXT log CASEINSENSITIVE


    This is a small batch file with which we can use STAF and work wonders.


    Dear Upcoming Testers...

    Testers can't get away with a "click here, click there and see the result" kind of thing while testing. Testers always need "DEEP THINKING" while they test. One of the famous examples was given by Mr. James Bach, which I already mentioned in About test cases. Testers always need deep thinking. I have spoken with some of the upcoming testers; they are in a situation where they do not know what the role of a tester is after getting into a company! I don't want to pinpoint and say "here is what went wrong, so you don't know the role of a tester". I meet some of my friends who are trained like me; we were trained in different schools in different styles, but the final output we get is knowledge. We have also discussed several times what the job/role of testers in a company is, and we always have to compromise by saying one thing or the other in our battle.
    And more importantly, most upcoming testers think automation means "QTP, LoadRunner". This is not at all the correct way of thinking; automation includes many concepts, from tools to approach. For example, the spell check in MS Office or in some other application can also be called automation. We have the resources; the only thing is that we have to improve our knowledge by implementing and sharing it. I have a few examples for testers:

    f'ing counting

    interactive puzzle

    The above examples are really awesome and we can also correlate this examples with Testing.

    Working with STAF

    STAF (Software Testing Automation Framework) is designed around reusable components called services. STAF is a remote agent used to control tests on various machines, making it easier to create automated test cases and workloads. It works on Windows, Linux, AS/400, and MVS.

    Concepts of STAF

    1).STAFProc

    2).STAF services
    a) Internal services are within STAFProc, which means they are always available
    and have constant names.
    b) External services are outside STAFProc, which means they always come from
    outside, for example from Java, C++, etc.
    c) Custom services are also external services; they can be written to our own
    needs and plugged into STAF.

    3).Queues and handles

    4).Variables

    5).Security

    6).Submitting Staf requests.

    Configuration file:
    After installing STAF, in order to get access to local machines, a trust level has to be given, and a Java SDK is needed for running STAF.


    TRUST LEVEL 5 MACHINE tcp://local machine name or ip address.*


    You can find the STAF.cfg file in the bin folder of STAF. You can also alter many features in the configuration file, such as:

    1).Specify the network interfaces

    2).Define operational parameters

    3).Define global variables

    4).Specify the security access

    5).Define startup/shutdown process

    6).Enable and configure tracing

    7).Register and configure external services

    The idea behind STAF is to run a very simple agent on all the machines that participate in the STAF testbed. Every machine can then run services on any other machine, subject to a so-called trust level. In practice, one machine will act as what I called the 'test management' machine, and will coordinate the test runs by sending jobs to the test clients. STAX is one of the services offered on top of the low-level STAF plumbing. It greatly facilitates the distribution of jobs to the test clients and the collection and logging of test results. STAX jobs are XML files spiced up with special tags that contain Python code (actually Jython, but there are no differences for the purpose of this tutorial). This in itself was for us a major reason for choosing STAF over other solutions.

    Common mistakes of software developers (cont.)

    Some of the common problems are listed:

    MY PROBLEM IS DIFFERENT

    Many designers and programmers refuse to listen to the experiences of others, claiming that their application is different, and of course much more complicated. Designers should be more open-minded about the similarities in their work. In response, ask “what is different in the LCD display software in a cellular phone versus one on a temperature controller? Are they really different?” Comparing control and communication systems side-by-side, both are characterized by modules that have inputs and outputs, with a function that maps the input to the output. A 256 by 256 image processed by an algorithm might not be very different from graphical code for an LCD dot matrix display of size 320 by 200. Furthermore, both use hardware with limited memory and processing power relative to the size of the application; both require development of software on a platform other than the target, and both share many of the issues of developing software for a micro-controller. The timing and volume of data are different, but if the system is designed correctly, these are just variables in equations. Methods to analyze resources such as memory and processing time are the same, both may require similar real-time scheduling, and both may also have high-speed interrupt handlers that can cause priority inversion. Perhaps if control systems and communication systems are similar, so are two different control applications or two different communication systems. Every application is unique, but more often than not the procedure to specify, design, and build the software is the same. Embedded software designers should learn as much as possible from the experiences of others, and not shrug off experience just because it was acquired in a different application area.


    Large if-then-else and case statements

    It is not uncommon to see large if-else statements or case statements in embedded code. These are problematic from three perspectives:

    1) They are extremely difficult to test, because the code ends up having so many different paths. If statements are nested, it becomes even more complicated.

    2) The difference between best-case and worst-case execution time becomes significant. This leads either to under-utilizing the CPU, or to the possibility of timing errors when the longest path is taken.

    3) The difficulty of structural code coverage testing grows exponentially with the number of branches, so branches should be minimized.

    This example confuses new testers who lack programming experience.
    Developers think their code is always correct; as mentioned earlier, 99% of errors are corrected by the developers themselves, and the remaining 1% of errors will be found by testers. Consider the example below:

    if (x > 0 && x < 12) {
        System.out.println("Month is " + x);
    } else {
        System.out.println("Invalid input");
    }


    Consider how this code could fail. Here are some simple, very common programming mistakes that can go wrong:

    a) Suppose the programmer wrote less than or equal to instead of less than. The program would then handle 0 incorrectly. The only way to catch the error is by testing with 0.

    b) If the code is written as less than 12 instead of less than or equal to 12, the program would wrongly reject 12.


    “Testing with just the four boundary characters /, 0, 9, and : will reveal every classification error that the programmer could make by getting an inequality wrong or by mistyping.”

    Error Handling:

    Errors in dealing with errors are common. Error handling errors include failure to anticipate the possibility of errors and protect against them, failure to notice error conditions, and failure to deal with a detected error in a reasonable way. Many programmers correctly detect errors but then branch into untested error recovery routines. These routines’ bugs can cause more damage than the original problem.

    Sometimes the error messages are quite long while executing the tests, and in the worst case (as with some Microsoft error dialogs) we can't copy the error messages. There are some tools for copying the text of such error messages, and we can also take screenshots.


    Conclusion:


    Testers should have the common sense to smell out non-obvious things. There is a relationship between developers and testers in fixing bugs: the probability of a bug getting fixed always depends on the way the test cases are communicated. Test cases play a major role in the QA's life. Test cases should be written in such a way that they are traceable, self-contained, and not duplicated; they should always be atomic.



    "Why go into something to test the waters? Go into it to make waves"





    HAPPY TESTING

    Common mistakes of Software developers

    Introduction:

    Most software developers are not even aware that their favorite methods are problematic. Quite often experts are self-taught, hence they tend to keep the same bad habits they had when they first began, usually because they never witnessed better ways of building their embedded systems. These experts then train novices, who subsequently acquire the same bad habits. The purpose of this presentation is to improve awareness of common problems, and to provide a start towards eliminating mistakes and thus creating software that is more reliable and easier to maintain.

    It is easy to spend a million on testing a program. Common estimates of the cost of finding and fixing errors in a program range from 40% to 80% of total development cost. Companies don’t spend this kind of money to “verify that a program works”. They spend it because the program doesn’t work: it has bugs and they want them found. No matter what development methodology they follow, their programs still end up with bugs. Beizer’s (1990) review estimated the average number of errors in programs released to testing at 1 to 3 bugs per 100 executable statements. There are big differences between programmers, but no one’s work is error-free.

    One error per 100 statements is an estimate of public bugs, the ones still left in a program after the programmer declares it error-free. Beizer (1984) reported his private bug rate, how many mistakes he made while designing and coding a program, as 1.5 errors per executable statement. This includes all mistakes, including typing errors.

    “At this rate, if your programming language allows one executable statement per line, you make 150 errors while writing a 100-line program.”

    Most programmers catch and fix more than 99% of their mistakes before releasing a program for testing. Having found so many, no wonder they think they must have found them all. But they haven’t. The tester’s job is to find the remaining 1%.

    Correcting just one of these mistakes within a project can lead to weeks or months of savings in manpower (especially during the maintenance phase of the software life cycle).

    Mail from Mr. Pradeep Soundararajan

    For a long time I was unable to update my blog after receiving the mail from Mr. Pradeep Soundararajan.


    Hi Amardeep,

    I am sure you know at least a little about me and you might not need my introduction. I found my blog linked to yours and as it seemed to be on testing, I did want to peruse it.

    I was impressed by the fact that you linked people like James and Dr Cem Kaner with respect. I intend to make a few suggestions to help you write better posts.

    1. The copy-paste stuff never makes someone read your blog, since there is so much copy-pasted stuff out there. All people might want to read is your experience, be it little or whatever, or your day-to-day testing activities, the problems you face, the problems you solve, the testers you meet and lots more.
    2. You might also want to read this: http://testertested.blogspot.com/2006/11/indian-testing-community-start.html

    Best wishes and Happy Testing!



    I felt really good after receiving the mail from Mr. Pradeep Soundararajan. I always want to write my own experiences and thoughts after reading his mail. I will do my best to improve my blog and skills.


    DEAR UPCOMING TESTERS: this mail is not only for me, but for all of you who want to become stars in the TESTING INDUSTRY.

    Why does software fail?

    Generally speaking, there are bugs in software because of unclear requirements, software complexity, programming errors, changes in requirements, errors made in bug tracking, time pressure, poorly documented code and/or bugs in tools used in software development.

    * There are unclear software requirements because there is miscommunication as what the software should or shouldn't do.

    * Software complexity. All of the following factors contribute to the exponential growth in software and system complexity.

    * Programming errors occur because programmers and software engineers, like everyone else, can make mistakes.

    * As to changing requirements, in some fast-changing business environments, continuously modified requirements are a fact of life. Sometimes customers do not understand the effects of changes, or understand them but request them anyway. And the changes require redesign of the software and rescheduling of resources; some of the work already completed has to be redone or discarded, and hardware requirements can be affected, too.

    * Bug tracking can result in errors because the complexity of keeping track of changes can result in errors, too.

    * Time pressures can cause problems, because scheduling of software projects is not easy and it often requires a lot of guesswork and when deadlines loom and the crunch comes, mistakes will be made.

    * Code documentation is tough to maintain and it is also tough to modify code that is poorly documented. The result is bugs. Sometimes there is no incentive for programmers and software engineers to document their code and write clearly documented, understandable code. Sometimes developers or programmers feel they cannot have job security if everyone can understand the code they write, or they believe if the code was hard to write, it should be hard to read.

    * Software development tools, including visual tools, class libraries, compilers, scripting tools, can introduce their own bugs. Other times the tools are poorly documented, which can create additional bugs.