Arun Francis's Posts (7)

5 top non-technical mistakes made by programmers

There are two sets of skills that a good software developer needs to cultivate: technical skills and non-technical skills. Unfortunately, some developers focus only on the technical part. When this happens, they develop bad habits; the following are the five top non-technical mistakes:

1.- Lack of discipline.
“Discipline is the bridge between goals and accomplishment.” Jim Rohn.

I’ve always thought that discipline is one of the most valuable skills, not only for being a software developer, but for being successful in any other area of life. It is also true that it is usually very hard to find people who are both brilliant and disciplined.

Steve Pavlina highlights the 5 pillars of self-discipline… “[...]Acceptance, Willpower, Hard Work, Industry, and Persistence. If you take the first letter of each word, you get the acronym “A WHIP” — a convenient way to remember them, since many people associate self-discipline with whipping themselves into shape.[...]” I highly recommend reading his series of articles on self-discipline.

My personal approach is to follow these steps every day.

  • Have your own to-do list for the day
  • Do one thing at a time
  • Do it right
  • Don’t finish something until it’s completely done
  • Better late than sorry BUT better sorry than never.

2.- Big egos

My experience says that big egos and programmers go hand in hand. The main problem with having a big ego is that it actually prevents you from realizing that you have one. A few indicators that may help you know whether your ego is way too big:

  • You consider yourself the best programmer.
  • You block conversations.
  • You ask for code reviews not to get criticism but to show how good your code is.

3.- Being a bad communicator.


“If I am to speak ten minutes, I need a week for preparation; if fifteen minutes, three days; if half an hour, two days; if an hour, I am ready now“. Woodrow Wilson

As human beings, communication is our main activity. Being a good communicator is hard but essential in our profession: we are continuously exchanging opinions about designs and code, having peer reviews, writing documentation, trying to convince someone else that our design is better, writing code…

Good communicators are people whose explanations are:

Focused. They talk only about what needs to be understood.
Clear. Easy to understand.
Concise. Nothing to be added, nothing to be taken away.

To be a better communicator, I have two pieces of advice:

If you think you are not a good communicator, prepare what you are going to say until it is focused, clear, and concise.
If engaged in a conversation, first listen, then think, and then talk.

4.- Forgetting about the customer.
“If we don’t take care of the customer… somebody else will.”

You are there for just one reason: your customer. It's easy to forget this point sometimes. I have been on teams where the focus was on technologies and platforms rather than on having a happy customer. We would spend months creating a framework that didn't deliver any value to the customer, and by the time we were about to start using it, we would discover that it didn't fit our users' needs.

5.- Not prioritizing the work properly.

Developers who are always gold-plating, researching new and more interesting technologies, over-engineering solutions, or just doing whatever they find cooler are impediments to the project. I'm not saying that it isn't normal to engage in side activities from time to time; we all need distractions. But if you often find yourself in the situations above, you may want to reconsider the way you prioritize your work.

Read more…

Five Common Mistakes and Their Solutions:

The dynamically changing IT industry brings forth new objectives and new perspectives for automated testing in areas that came to life over the last decade, such as cloud-based and SaaS applications, e-commerce, and so on. The last five years saw immense growth in the number of agile and Scrum projects. Additionally, the IT market has changed significantly: not only have various new tools appeared (including Selenium 2, Watir WebDriver, BrowserMob, and Robot Framework), but approaches have also changed completely. For example, more focus has been placed on cloud-based test automation solutions, both for performance testing and for functional testing. Cloud-based testing of web applications is now replacing "classical" local deployments of testing tools.

Even though automated testing has a vast number of benefits, test automation can often fail. Typical mistakes include selecting the wrong automation tool, using the tool incorrectly, or choosing the wrong time to start creating tests. It is also worth paying special attention to the test automation framework and to a proper division of work between the test automation and manual testing teams. The "Test Cases Selection" section of this article highlights many reasons why certain test cases must not be automated. Let's take a closer look at the five most common mistakes of test automation in agile and their possible solutions.

1. Wrong Tool Selection
Even though a popular tool may offer a commendably rich feature set at an affordable price, it can still have hidden problems that are not obvious at first glance, such as insufficient support for the product or a lack of reliability. This occurs with both commercial and open source tools.

Solution
When selecting a commercial test automation tool for a specific project, it is not enough to consider only the tool's features and price; it's best to analyze feedback and recommendations from people who have successfully used the tool on real projects. When selecting an open source tool, the first thing to consider is the community support, because these tools are supported only by their community, not by a vendor. The chances of getting issues with the tool corrected are much higher if the community is strong. Looking at the number of posts in forums and blogs throughout the web is a good way to assess the actual size of the community. A couple of good examples include stackoverflow.com, answers.launchpad.net, www.qaautomation.net, and many other test automation forums and blogs that you can find by entering the name of the given tool in a search engine.

In order to understand whether a test automation tool was selected properly, you should begin by answering a few questions:

  • Is your tool compatible with the application environment, technologies, and interfaces?
  • What is the cost of your chosen test automation tool?
  • How easy is it to write, execute, and maintain test scripts?
  • Is it possible to extend the tool with additional components?
  • How fast can a person learn the scripting language used by the tool?
  • Is your vendor ready to resolve tool-related issues? Is the community support strong enough?
  • How reliable is your test automation tool?

Answering these questions will provide a clear picture of the situation and may help you decide whether the advantages of using this tool outweigh the possible disadvantages.

2. Starting at the Wrong Time
It is a common mistake to begin test automation development too early, because the benefits almost never justify the effort lost redeveloping test automation scripts while the functionality of the application keeps changing until the end of the iteration. This is a particularly serious issue for GUI (Graphical User Interface) test automation, because GUI automation scripts are much more likely to be broken by ongoing development than any other type of automated test, including unit tests, performance tests, and API tests. Unfortunately, even after finishing the design phase you may still not know all the necessary technical details of the implementation, because the selected design could be realized in a number of different ways. For GUI tests, the technical details of the implementation always matter. Starting automation early may therefore result in repeated, wasted effort redeveloping the automated tests.

Solution
During the development phase, members of a Quality Assurance (QA) team should spend more time creating detailed manual test cases suitable for test automation. If the manual test cases are detailed enough, they can be automated successfully after the given feature is completed. Of course, it's not a bad idea to write automated tests earlier, but only when you are 100 percent confident that further development within the current iteration will not break your new tests.
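
To make "detailed enough to automate" concrete, here is a minimal sketch (my illustration, not the article's) of a manual login test case turned into an automated one after the feature is finished. It assumes the Selenium WebDriver Python bindings; the URL, element IDs, and expected title are hypothetical placeholders for the application under test.

```python
# A sketch only: URL, element IDs, and the expected title are assumed placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By


def test_valid_login():
    driver = webdriver.Chrome()  # any WebDriver-supported browser works here
    try:
        # Manual step 1: open the login page.
        driver.get("https://staging.example.com/login")
        # Manual step 2: enter a valid user name and password.
        driver.find_element(By.ID, "username").send_keys("demo_user")
        driver.find_element(By.ID, "password").send_keys("demo_pass")
        # Manual step 3: press the login button.
        driver.find_element(By.ID, "login-button").click()
        # Expected result from the manual test case: the dashboard is shown.
        assert "Dashboard" in driver.title
    finally:
        driver.quit()
```

Because each manual step and each expected result is written out explicitly, mapping them to automated steps becomes almost mechanical.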

3. Framework
Do you know what’s wrong with the traditional agile workflow? It does not seem to encourage the inclusion of test automation framework development tasks, because they carry zero user story points. But it’s no secret that any good and effective test automation requires both tools and a framework. Even if you have already spent several thousand dollars on a test automation tool, you still need a framework to be developed by your test automation engineers. The test automation framework should always be considered, and its development never underestimated. How does this fit into the agile process? Pretty easily, actually; it's not as incompatible as it may seem.

Solution
How much time would you need to develop a test automation framework? In most cases it will take no longer than two weeks, which equals the usual agile iteration. Thus, the solution is to develop the test automation framework in the very first iteration. You are probably wondering whether that means the product will remain untested, but that is not the case, because it can still be tested manually during that period. An increase in the manual testers' workload is probably unavoidable, but there is not much testing to do during the first iteration anyway (developers are more focused on backend development, usually covered by unit tests), so the process balances itself. The very first iteration will look like this: start by analyzing requirements and designing the test automation framework during the design phase; then develop, debug, and test it until the end of the iteration.
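
What might such a first-iteration framework skeleton contain? A rough sketch, assuming Python's unittest: shared configuration, logging, and setup/teardown that every later test reuses. The BaseTest name and the config values are illustrative assumptions, not a prescribed design.

```python
# Illustrative skeleton only; config values and class names are assumptions.
import logging
import unittest

CONFIG = {
    "base_url": "https://staging.example.com",  # hypothetical test environment
    "timeout_seconds": 10,
}


class BaseTest(unittest.TestCase):
    """Common plumbing that every automated test inherits instead of re-implementing."""

    @classmethod
    def setUpClass(cls):
        logging.basicConfig(level=logging.INFO)
        cls.log = logging.getLogger(cls.__name__)
        cls.config = CONFIG

    def setUp(self):
        # A real framework would start a browser session or API client here.
        self.log.info("starting %s", self._testMethodName)

    def tearDown(self):
        # ...and capture screenshots/logs on failure, then clean up.
        self.log.info("finished %s", self._testMethodName)


class SmokeTest(BaseTest):
    def test_configuration_is_available(self):
        self.assertIn("base_url", self.config)


if __name__ == "__main__":
    unittest.main()
```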

Figure 1: Development of test automation framework during the first iteration, focusing on manual testing

The next iteration is back to normal:

Figure 2: Test automation workflow during the next iterations

4. Test Cases Selection
How do we select test cases for automation? That's an interesting question, and the grounds for another common mistake: trying to automate all test cases. But "automate them all" is hardly an answer if you are focused on quality and efficiency. Following this principle leads to useless effort and money spent on test automation without bringing any real value to the product.

Solution
There are certain cases where it's better to automate and other cases where it doesn't make much sense to do so; deciding which is which should always come first. You should automate when your test case is executed frequently and takes time to run manually, when the test will run with different sets of data, or when the test case needs to be run on many different platforms and system configurations.
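
To illustrate the "many platforms and configurations" criterion, here is a minimal pytest/Selenium sketch in which the same test is generated once per browser; the browser list, URL, and expected title are assumptions for the example, not a recommendation from the article.

```python
# Sketch: one login-page check generated per browser configuration.
import pytest
from selenium import webdriver

BROWSERS = ["chrome", "firefox"]  # extend with the configurations you must cover


@pytest.fixture(params=BROWSERS)
def driver(request):
    drv = webdriver.Chrome() if request.param == "chrome" else webdriver.Firefox()
    yield drv
    drv.quit()


def test_login_page_loads(driver):
    driver.get("https://staging.example.com/login")  # hypothetical URL
    assert "Login" in driver.title                   # hypothetical expected title
```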

On the other hand, test automation cannot be used for usability testing, and it is a poor fit in the following situations: when the functionality of the application changes frequently; when the expenditure on test automation tools and on supporting the already existing tests is too high; and when test automation doesn't provide enough advantage compared to manual testing.

5. Test Automation vs. Manual Testing
A lack of coordination between your automated testing and manual testing subteams is another common mistake. It can lead to excessive effort spent on testing and to poor-quality software. Why does this happen so often? In most cases, manual testing teams may not have enough technical skills to review automated test cases, so they prefer to hand off this responsibility to the automated testing teams. This causes a different set of problems, including:

  1. The test automation scripts are not testing what they should, and in the worst case scenario, they are testing something that is not even close to the requirements.
  2. To make a test pass, test automation engineers may change the test automation scripts to skip certain verifications.
  3. Automated tests can become unsynchronized with the manual test cases.
  4. Some parts of the application under test receive double coverage, while others are not covered at all.

Solution
In order to avoid these problems, it's best to keep your whole QA team centralized and cohesive. The automated testing subteam should obtain the requirements from the same place as the manual testing subteam. The same list of test cases should be kept and maintained for both subteams. Automated test cases should be presented in a format that is easy for non-technical staff to understand. There are many ways to achieve this, including human-readable scenarios, keyword-driven frameworks, or simply keeping the code clean and providing sufficient comments.
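
As one possible illustration of the keyword-driven idea, test cases can be kept as plain (keyword, arguments) rows that non-technical reviewers can compare against the manual test cases, while a small dispatcher maps each keyword to code. The keywords, actions, and data below are assumptions for the sketch; a real implementation would drive a browser or an API instead of printing.

```python
# Keyword implementations; in a real framework these would call WebDriver or an API.
def open_page(url):
    print(f"opening {url}")

def enter_text(field, value):
    print(f"typing '{value}' into the '{field}' field")

def verify_text(expected):
    print(f"verifying that the page shows '{expected}'")

KEYWORDS = {
    "Open Page": open_page,
    "Enter Text": enter_text,
    "Verify Text": verify_text,
}

# The test case itself is readable data, so both subteams can review it against
# the manual test case it mirrors.
LOGIN_TEST = [
    ("Open Page", "https://staging.example.com/login"),
    ("Enter Text", "username", "demo_user"),
    ("Enter Text", "password", "demo_pass"),
    ("Verify Text", "Welcome, demo_user"),
]

def run(test_case):
    for keyword, *args in test_case:
        KEYWORDS[keyword](*args)

if __name__ == "__main__":
    run(LOGIN_TEST)
```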

Conclusion
I have listed only the most common mistakes that can reduce the efficiency of test automation on your project and result in poor quality. It's wise to pay close attention to test automation activities and to consider them an integral part of your project's quality assurance process. If you take test automation seriously, you will be able to avoid most of the above-mentioned mistakes.

Read more…

Most of the focus in managing software requirements has centered on Users and Developers. It seems obvious to do so, doesn't it? Users are the ones who will use the product, may also be the ones asking for it, and thus are typically the ones we focus on during requirements elicitation. Developers take those requirements; transform the concepts, rules, models, and statements into code; and produce a solution that hopefully does what was defined at the start of the project.

 

I have heard from our requirements architects and practitioners that there is an element missing from this focus, one that is intrinsic to the success of the project: the QA and Testing Teams. These are the people who, using the requirements, assess whether or not the Developers have delivered what the requirements specified. They are usually handed the requirements as they begin their test process, and only rarely are they included earlier in the requirements process.

 

Why should you involve the QA and Testing Teams earlier in the software requirements process? Here are 5 good reasons, learned through painful project experience:

 

  1. Doing so will greatly reduce the time QA spends asking the Business Analysts and Developers “what does this mean?” questions about the requirements, likely leading to tweaking the requirements after Development has done significant work.
  2. It can reduce the time Development will spend on rework if the requirements do end up being tweaked.
  3. By familiarizing QA and Testing Teams with requirements earlier, there is a better chance that requirements will be clear and consistent. QA often will find ambiguities in the requirements documents, which then can be sent back to Users for clarification.
  4. QA can contribute ideas to ensure requirements will be testable, by suggesting evaluation criteria for each requirement. This way, the whole team has a mechanism by which they can test whether or not the requirements are indeed complete, accurate, and clear. 
  5. Improve the outcomes from testing, by reducing the chances that software will pass QA tests and yet won’t meet the requirements. How is that even possible? Imagine this: the Business Analyst team provides requirements; Developers read—and think they understand—the requirements. The Developers write some tests and code the features; QA tests the features, which pass the tests. Then the solution is presented to the Business Analyst team, at which point the solution is discovered to be incorrect due to requirements ambiguities and misinterpretation.

If you have more (or better) reasons than my five, or if you think including QA and Testing earlier in the requirements process will cause problems, please leave a comment and let me know.

 

Read more…

Friends, in today's IT world agile plays a vital role, so I posted a blog regarding agile and how it produces higher quality. Scrum goes hand in hand with agile, since it is a part of the agile methodology. This agile process suits all sorts of applications and domains.

 

10. More appropriate distribution of test coverage. Typically, QA gets a big dump of functionality near the end of a release and has to figure out how to best use the time available. That often means that the "most important" new features get very thorough testing and the rest get a "spot check." In an Agile project, test plans and automated tests are created throughout the project and each work item is given the amount of QA resources that is appropriate for it.

9. Because testing is done in the same timeframe as the coding, problems are found earlier.

8. Writing tests early catches requirement and design problems earlier.

7. Because problems are found and fixed faster, there is less chance of the quality of a project being poor for long stretches of time. When there are lots of tests that don’t pass, it is difficult to get accurate feedback on new code. In contrast, code written on a stable base is more likely to be stable itself because there will be accurate and timely feedback on the results of the changes.

6. It is hard to succeed in an Agile environment without automated testing. Automated testing helps to increase the consistency of testing.

5. One effect of short iterations is the evening out of resource demands. Testing is done consistently and on a regular basis, with no need to take shortcuts. This is in contrast to typical development, which compresses most of the testing into the end of the process and then forces shortcuts due to schedule pressure.

4. More frequent customer input on direction. Part of quality is usability and match of features to needs.

3. More frequent customer input on results. Customers are the ultimate arbiter of quality, and their level of expectation is often different from what you expect.

2. The Development and QA organizations must be integrated for Agile success. Integrated development and QA is far better than the typical “separation of church and state”.

1. Especially when doing one piece flow, there is significantly more opportunity to detect process problems, diagnose them, try corrective action, and gauge the results of the corrective action.

Thanks :) 

Read more…

Android or iPhone! Which is better?

   

Top things Android does better than iPhone OS

 

->  Android can run multiple applications at the same time (multitasking)

->  Android is customizable

->  Integration with Google applications

->  Free turn-by-turn navigation

->  Voice to text

->  Android keeps information visible on your home screen

->  Android has a better app market

->  Android gives you better notifications

->  Android lets you choose your hardware

->  Android lets you choose your carrier

->  Android lets you install custom ROMs

->  Android lets you change your settings faster

->  Android does Google and social integration

->  Android gives you more options to fit your budget

->  You don't need iTunes to activate your Android

->  Application freedom

Read more…

Top 10 Negative Test Cases

Negative test cases are designed to test the software in ways it was not intended to be used, and should be a part of your testing effort. Below are the top 10 negative test cases you should consider when designing your test effort:

  • 1. Embedded Single Quote
    Most SQL-based database systems have issues when users store information that contains a single quote (e.g., John's car). For each screen that accepts alphanumeric data entry, try entering text that contains one or more single quotes.
  • 2. Required Data Entry
    Your functional specification should clearly indicate fields that require data entry on screens. Test each field on the screen that has been indicated as being required to ensure it forces you to enter data in the field.
  • 3. Field Type Test
    Your functional specification should clearly indicate fields that require specific data entry requirements (date fields, numeric fields, phone numbers, zip codes, etc). Test each field on the screen that has been indicated as having special types to ensure it forces you to enter data in the correct format based on the field type (numeric fields should not allow alphabetic or special characters, date fields should require a valid date, etc).
  • 4. Field Size Test
    Your functional specification should clearly indicate the number of characters you can enter into a field (for example, the first name must be 50 characters or fewer). Write test cases to ensure that you can only enter the specified number of characters. Preventing the user from entering more characters than allowed is more elegant than giving an error message after they have already entered too many.
  • 5. Numeric Bounds Test
    For numeric fields, it is important to test for lower and upper bounds. For example, if you are calculating interest charged to an account, you would never have a negative interest amount applied to an account that earns interest; therefore, you should try testing it with a negative number. Likewise, if your functional specification requires that a field be in a specific range (e.g., 10 to 50), try entering 9 or 51; the entry should fail with a graceful message.
  • 6. Numeric Limits Test
    Most database systems and programming languages allow numeric items to be identified as integers or long integers. Normally, a 16-bit integer has a range of -32,768 to 32,767, and a 32-bit long integer ranges from -2,147,483,648 to 2,147,483,647. For numeric data entry fields that do not have specified bounds, test around these limits to ensure the application does not run into a numeric overflow error.
  • 7. Date Bounds Test
    For date fields, it is important to test for lower and upper bounds. For example, if you are checking a birth date field, it is probably a good bet that the person’s birth date is no older than 150 years ago. Likewise, their birth date should not be a date in the future.
  • 8. Date Validity
    For date fields, it is important to ensure that invalid dates are not allowed. Your test cases should also check for leap years (years divisible by 4 are leap years, except century years, which are leap years only when divisible by 400); see the sketch after this list.
  • 9. Web Session Testing
    Many web applications rely on the browser session to keep track of the person logged in, settings for the application, etc. Most screens in a web application are not designed to be launched without first logging in. Create test cases to launch web pages within the application without first logging in. The web application should ensure it has a valid logged in session before rendering pages within the application.
  • 10. Performance Changes
    As you release new versions of your product, you should have a set of performance tests that you run that identify the speed of your screens (screens that list information, screens that add/update/delete data, etc). Your test suite should include test cases that compare the prior release performance statistics to the current release. This can aid in identifying potential performance problems that will be manifested with code changes to the current release.
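
As a concrete illustration of the Numeric Bounds (case 5) and Date Validity (case 8) checks above, here is a minimal, self-contained pytest sketch. The 10-to-50 range, the YYYY-MM-DD format, and the helper functions are assumptions made for the example, not part of the original list.

```python
# Negative-input checks for date validity (leap years) and numeric bounds.
import pytest
from datetime import datetime


def is_valid_date(value: str) -> bool:
    """True only for real calendar dates in YYYY-MM-DD format (leap years included)."""
    try:
        datetime.strptime(value, "%Y-%m-%d")
        return True
    except ValueError:
        return False


def in_range(value: str, low: int = 10, high: int = 50) -> bool:
    """True only when value parses as an integer inside [low, high]."""
    try:
        return low <= int(value) <= high
    except ValueError:
        return False


@pytest.mark.parametrize("date_input, expected", [
    ("2024-02-29", True),    # leap year: divisible by 4
    ("2023-02-29", False),   # not a leap year
    ("1900-02-29", False),   # century year not divisible by 400
    ("2000-02-29", True),    # century year divisible by 400
    ("2023-13-01", False),   # month out of range
])
def test_date_validity(date_input, expected):
    assert is_valid_date(date_input) is expected


@pytest.mark.parametrize("raw, expected", [
    ("9", False), ("10", True), ("50", True), ("51", False),   # boundary values
    ("-1", False), ("abc", False), ("John's", False),          # negative and non-numeric
])
def test_numeric_bounds(raw, expected):
    assert in_range(raw) is expected
```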


 

Read more…

MOBILE APPLICATION TESTING CHECKLIST -- iPhone and Android applications

 (Written on my own) 

 

1.) Testing mobile applications through
i) Devices.
ii) iPhone -- Simulator
iii) Android -- Emulator

 

2.) Installation & Uninstallation Testing

 

3.) A few security checks if the application is a social networking application or links to social networking applications like Facebook, Twitter, LinkedIn, etc.

 

4.) Inner functionality -- Functional testing

 

5.) System Crash / Force Close

 

6.) Performance & Stress Testing

7.) Cosmetic issues (look and feel)

8.) Page scrolling

9.) Navigation between screens

10.) Truncation errors

11.) Data testing (contents)

12.) Performance -- application and inner page load times

13.) Network Testing (if the application is network based):
1.) Verify the behavior of the application when there is a network problem and the user is performing operations that need a data call.
2.) The user should get a proper error message, such as "Network error. Please try again after some time."

14.) Application-specific testing (i.e., application behavior testing based on the mobile device used) / some device-specific testing for the application

15.) Application Side Effects:
1.) Make sure that your application does not cause other applications on the device to misbehave.
2.) The installed application should not interfere with other applications on the device.

Some more common checklist items for both Android and iPhone that need to be tested in all apps:

1.) System Crash / Force Close

2.) Performance/memory testing

3.) Check with different networks -- Wi-Fi, 3G, and 4G; also field-based and flight-mode tests (if needed).

4.) Check Installation -- Install the application being tested.

5.) Check Application start/stop behavior -- Start the application by selecting the icon or following the steps outlined in the submission statement

6.) Check that there is no disruption to voice calls -- with the application installed and running, use a second phone to call the test device.

7.) Check that there is no disruption to text messages -- with the application installed and running, send a text message to the test device.

8.) Check the auto-start behavior -- with the application running, find the settings for the application, either within the application itself or from the settings option on the device.

9.) Check for Multitasking -- No disruption to key device applications

10.) Check navigation, tabs, page scrolling, etc.

11.) Check social networking options such as sharing, posting, links, etc.

12.) Memory testing -- check the application's behavior by filling the device memory and then emptying it, and compare the application's performance in both states.

13.) Check any payment gateway involved, such as PayPal, Chargify, etc.

14.)  Check uninstall of apps -- The application must be uninstalled without error. 

Also have a look at this:

--> Perform multitasking or multiprocessing in the mobile application and compare the performance before and after this process.

$ Fill the memory and test whether the application is still performing well. In some cases, when the memory is full, the application works slowly or performance drops to zero.

$ Test the load time of the application's inner and outer areas, i.e., the interior page loading time (articles, images).

$ Test the page navigations simultaneously.

$ Test on different devices for the best performance.

-Nexus One 2.2+

- Moto Droid X (2.1)

- Samsung Galaxy S (2.1)

- Nexus One (2.1)

- HTC Desire (2.1)

- SE Xperia X10i (1.6)

- Moto Droid (2.0)

- HTC Magic (1.5)

iPhone -- check with 2G, 3G, and 4G

$ Test whether a force close/system crash occurs during page loading and navigation. In these situations the application can face a lot of force closes and performance can drop to nothing.

$ Test the menus present on the same page simultaneously and check the performance.

$ Perform negative test cases for the best performance of the application, via both functionality and design.

 

Read more…