The agenda for Innovate has had a few more changes.

Here’s the latest version of the agenda.

Agenda 19.09.12 V2

 

Posted by: uktestexpert | September 13, 2012

UPDATED – IBM Rational Innovate UK 2012 agenda

It seems the agenda released yesterday was incorrect.

Here is the correct version with updated timings.

Agenda 13.9.12 V3

Posted by: uktestexpert | September 12, 2012

Innovate 2012 Agenda

The agenda for Innovate 2012 has been released.

More details in the attached PDF.

agenda 12.9.12 v2

Posted by: uktestexpert | August 23, 2012

Sizing your performance test load…

Just lately, I have had a number of requests to provide hardware sizings for Rational Performance Tester (RPT) along the lines of “this spec should support ‘x’ users of ‘y’ protocol”.

Sounds relatively simple?

The issue around sizing is that frequently we don’t know the number of users, the type of protocol being tested, the transactional throughput or the complexity of the application being tested. All of these factor into the number of load injector machines required to generate the load.

The following isn’t an answer; it just describes the type of things that should be considered.

Depending on the protocol being tested, the footprint of the RPT virtual tester will vary. For example, HTTP is likely to be somewhere in the range of 1–5MB; other protocols will be more. The reason for such a wide range is that the footprint depends on the level of logging performed by the performance tool and the size of the pages being downloaded by the virtual user. For instance, a page containing lots of large images will increase the virtual tester’s footprint compared to a basic page with some lightweight HTML content.

Based on the rough estimate of 5MB per virtual user, if we assume a typical load injector machine has 2GB RAM, we need to allow 400–500MB for the operating system and any software that may be running on it. This leaves roughly 1.5GB remaining. Typical recommendations are that CPU and memory usage should not exceed 70–80%, so that the injector machine can comfortably generate the load; if the injector machine is overloaded it will struggle and skew any results.

The same is also true for the network card: if the number of requests exceeds what can physically be placed onto the network, requests will start stacking up and the network card will become a bottleneck, again skewing the results.

Based on 70% usage, this allows 1050MB to be used for the generation of virtual users. At 5MB per user, this works out at approximately 210 virtual users per injector machine.
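For anyone who wants to rerun this arithmetic with their own figures, here is a minimal back-of-the-envelope sketch in Python. The defaults (2GB treated as 2000MB, 500MB OS overhead, a 70% usage ceiling and 5MB per virtual user) are simply the rough assumptions from the example above, not product guidance; substitute figures measured for your own application.

import math

def users_per_injector(total_ram_mb=2000,
                       os_overhead_mb=500,
                       max_usage=0.70,
                       mb_per_virtual_user=5.0):
    """Estimate how many virtual users a single load injector can host."""
    usable_mb = (total_ram_mb - os_overhead_mb) * max_usage
    return int(usable_mb // mb_per_virtual_user)

def injectors_needed(target_users, **kwargs):
    """Round up to the number of injector machines needed for a target load."""
    return math.ceil(target_users / users_per_injector(**kwargs))

print(users_per_injector())    # 210 virtual users per injector, as above
print(injectors_needed(4500))  # 22 machines for a 4,500-user test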

This example errs heavily on the side of caution and is purely to highlight what needs to be considered. In reality, this machine may be able to generate in excess of 700 users. The best approach is to derive a realistic figure based on knowledge of the application being tested and fine-tune this estimate as part of a sizing exercise.

This is a real HTTP example…

Usually RPT uses less than 2MB per virtual user; it is safe to calculate with 0.5MB.
– 8 x SuSE Linux Enterprise 9 servers, each with 2 Xeons at 3.0GHz and 4GB RAM
– With 4,500 users we achieved 70 transactions per second in total. The load was below 25% on each agent.
– With 2,000 users the load was below 10%.

SOA and Siebel should be similar, as they are also based on the HTTP protocol. Citrix and SAP are more costly: in order to generate the protocol conversation and interpret the results, RPT starts multiple sessions of the host application, which is comparable to starting 50 Citrix instances on a single machine. We normally estimate 50–70 users per box in these situations.

Hope this information is useful.

Posted by: Anthony Kesterton | August 20, 2012

Innovate UK – 23rd October 2012

Plans are in hand for the 2012 Innovate UK conference. We are back at the Grange St Paul’s Hotel in London, with a lineup that covers Mobile development, ALM, Design/Development/Deployment and a Systems track. We also have a special track focused on the business of IT.

More information is available here.

Date: August 9th, 2012 Time: 9:30 AM – 4:00 PM Location: IBM Hursley, Winchester

This is your opportunity to learn about Rational Quality Manager’s features and functionality and then take it for a hands-on test drive. IBM® Rational® Quality Manager is a web-based, centralized test management hub for business-driven software quality.

This session is offered free of charge. Complimentary refreshments including continental breakfast and lunch will be provided. However, participants are responsible for their own business travel expenses.

Contact me for details.

Date: August 8th, 2012 Time: 9:30 AM – 4:00 PM Location: IBM Hursley, Winchester

The objective of this session is to demonstrate, through an interactive, hands-on experience, the power of visual techniques for requirements definition and management enabled by IBM Rational Requirements Composer, and its integration with IBM Rational RequisitePro. Attendees will learn key concepts and use major capabilities.

This session is offered free of charge. Complimentary refreshments including continental breakfast and lunch will be provided. However, participants are responsible for their own business travel expenses.

Contact me for details.

Date: August 7th, 2012 Time: 9:30 AM – 4:00 PM Location: IBM Hursley, Winchester

The objective of this session is to demonstrate, through an interactive, hands-on experience, the power of collaborative development enabled by IBM Rational Team Concert.

Attendees will work together as a team to learn key concepts and use major capabilities.

This session is offered free of charge. Complimentary refreshments including continental breakfast and lunch will be provided. However, participants are responsible for their own business travel expenses.

Contact me for details.

Posted by: uktestexpert | July 26, 2012

Manual Testing in an Agile Environment

One of my fellow members at the Ministry of Testing (Matt Archer) has shared this mind map. It is a bit of a gem: big and full of useful advice for any tester working in an agile, manual-testing environment.

Also, there is a useful checklist (copied below) that has been compiled here.

1. Common challenges experienced by manual testers in agile teams

1.1 The sprint comes to an end, but testing is not yet finished

1.1.1 Common Causes

1.1.1.1 Test execution started too late in the sprint
  • 1.1.1.1.1 Watch out for the “mini-waterfall” sprint
  • 1.1.1.1.2 Stories are too large
  • 1.1.1.1.3 People incorrectly assume testers only want to test a new feature once it’s “finished”
1.1.1.2 Test execution takes too long

1.1.1.2.1 Poor application testability

  • 1.1.1.2.1.1 Difficult to set up system state / pre-conditions
  • 1.1.1.2.1.2 Lack of reporting / logging for diagnostic purposes

1.1.1.2.2 Lack of tools / utilities for manual testing

  • 1.1.1.2.2.1 Log parsers
  • 1.1.1.2.2.2 Data generators
  • 1.1.1.2.2.3 Semi-automated oracles

1.1.1.2.3 Slow / unreliable process for creating test builds

  • 1.1.1.2.3.1 Testers told to get on with it themselves
  • 1.1.1.2.3.2 Minimal tool / automation support
  • 1.1.1.2.3.3 Builds are unusable when they arrive
  • 1.1.1.2.3.3.1 Little / no unit tests

1.1.1.2.4 Bug count allowed to escalate

  • 1.1.1.2.4.1 Unfixed bugs camouflage other bugs
  • 1.1.1.2.4.2 Unfixed bugs lead to duplicate effort
  • 1.1.1.2.4.3 Unfixed bugs distract the entire team
1.1.1.3 Too much time spent on test preparation
  • 1.1.1.3.1 Too documentation heavy
  • 1.1.1.3.2 Analysis paralysis
  • 1.1.1.3.2.1 Yes, it happens to testers too!
  • 1.1.1.3.3 Trying to prepare too far ahead
1.1.1.4 Team velocity based on coding only
  • 1.1.1.4.1 Testing is ignored / forgotten
1.1.1.5 Testers given poor advice
  • 1.1.1.5.1 “Just test the same way you’ve always tested”
  • 1.1.1.5.1.1 Some traditional testing practices are compatible with agile software development
  • 1.1.1.5.1.2 Others less so!

1.2 Testing is finished during the sprint, but confidence is low

1.2.1 Common Causes

1.2.1.1 Testing is quick, but ad-hoc

  • 1.2.1.1.1 Too little planning
  • 1.2.1.1.1.1 Proper planning
  • 1.2.1.1.1.1.1 Not just creating test plans from templates!
  • 1.2.1.1.2 Little thought given to how testing thoroughness and coverage will be measured
  • 1.2.1.1.2.1 Testing finishes when the sprint ends
  • 1.2.1.1.3 Testers given poor advice
  • 1.2.1.1.3.1 Just “sniff” around the areas not covered by the automated tests

1.2.1.2 Scope of the release is unclear

  • 1.2.1.2.1 Change is unstructured and uncontrolled
  • 1.2.1.2.2 Niche / subtle features added without tester’s knowledge

1.2.1.3 Testing performed by people with little testing experience

  • 1.2.1.3.1 Only the obvious bugs discovered
  • 1.2.1.3.2 Testing seen as a background task
  • 1.2.1.3.2.1 Performed on a best endeavours basis

2 Use models to aid rapid test design and keep a record of your tests

2.1 Models exist as part of test design techniques

2.1.1 Examples

  • 2.1.1.1 Boundary Value Analysis
  • 2.1.1.2 State Transition Testing
  • 2.1.1.3 Classification Trees
  • 2.1.2 Created by
  • 2.1.2.1 Testers
  • 2.1.3 Make the model (diagram) your test preparation
  • 2.1.3.1 DON’T explicitly create any tests based on the model
  • 2.1.3.2 DO define the tests you want to run as a coverage target over the model
  • 2.1.3.2.1 “Test that a booking can be moved from every state to every other (valid) state”
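To make the coverage-target idea concrete, here is a minimal Python sketch that turns a state model into exactly one test line per valid transition, rather than hand-writing each test. The booking states and transitions are invented for illustration; only the technique comes from the point above.

# A hypothetical booking state model: each state maps to the states
# that can validly be reached from it.
BOOKING_STATES = {
    "draft":     ["confirmed", "cancelled"],
    "confirmed": ["amended", "cancelled", "completed"],
    "amended":   ["confirmed", "cancelled"],
    "cancelled": [],
    "completed": [],
}

def transition_tests(model):
    """Derive the checklist from the model: one test per valid transition."""
    for source, targets in model.items():
        for target in targets:
            yield f"Test that a booking can be moved from '{source}' to '{target}'"

for test in transition_tests(BOOKING_STATES):
    print(test)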

2.2 Models exist as part of development and requirement techniques

2.2.1 Examples

  • 2.2.1.1 Activity diagrams
  • 2.2.1.2 Entity relationship diagrams
  • 2.2.1.3 Security matrices

2.2.2 Created by

  • 2.2.2.1 Other members of the team
  • 2.2.3 Make adding a coverage target to someone else’s model your test preparation
  • 2.2.3.1 An extremely fast way to define tests for a sprint
  • 2.2.3.1.1 And get feedback from others
  • 2.2.3.2 “Test all of the security permissions described in the security matrix for all public-facing roles (both positive and negative cases)”

2.3 Models exist in our minds

2.3.1 Our meta-models of the world around us

  • 2.3.1.1 And the systems we test

2.3.2 Use them to challenge / explore the system being tested and the physical artefacts used to describe them

2.3.3 Stress test your own mental models and the mental models of others using NLP

  • 2.3.3.1 Good source of test ideas for exploratory testing
  • 2.3.3.2 See “NLP for Testers” (Alan Richardson)

3 Consider converting test scripts to checklists

3.1 Test scripts focus on how to interact with the software to test it

  • 3.1.1 Often lengthy to write
  • 3.1.2 Often difficult to maintain

3.2 Checklists focus on what to test about the software and why it’s important

3.2.1 Quick to write

  • 3.2.1.1 As short as 1 line / sentence per test

3.2.2 Quick to maintain

3.3 Different types of checklist

3.3.1 The target of a checklist can vary

3.3.1.1 A feature

  • 3.3.1.1.1 Example
  • 3.3.1.1.1.1 “Account Management”
  • 3.3.1.1.2 Can be used to support the testing of a specific story, feature or function

3.3.1.2 A characteristic / category of features

  • 3.3.1.2.1 Example
  • 3.3.1.2.1.1 “All User Interfaces”
  • 3.3.1.2.2 Can be reused across the entire system
3.3.2 The focus of a checklist can vary
  • 3.3.2.1 Think “types of testing”
  • 3.3.2.2 Examples
  • 3.3.2.2.1 Positive
  • 3.3.2.2.2 Negative
  • 3.3.2.2.3 Functional
  • 3.3.2.2.4 Performance
  • 3.3.2.3 The list is unlimited
3.3.3 Checklist data can vary

3.3.3.1 Implicit

  • 3.3.3.1.1 Person performing the test provides the data in real-time
  • 3.3.3.1.1.1 Slower to execute
  • 3.3.3.1.1.2 But more variety over time

3.3.3.2 Explicit

  • 3.3.3.2.1 Suggestions for data values included in the checklist
  • 3.3.3.2.1.1 Quicker to execute
  • 3.3.3.2.1.2 But beware the repetition
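As a sketch of how lightweight a checklist can be in practice, the snippet below models each entry as a single line of “what and why”, with explicit data suggestions as an optional extra, so implicit and explicit styles can be mixed in one list. The feature, wording and data values are all made up for illustration.

from dataclasses import dataclass, field

@dataclass
class CheckItem:
    what: str                                 # what to test about the software
    why: str                                  # why it's important
    data: list = field(default_factory=list)  # explicit data suggestions, if any

account_checklist = [
    # Implicit data: the tester supplies values in real time (slower, more variety).
    CheckItem("Rename an account", "account names appear on customer invoices"),
    # Explicit data: quicker to execute, but beware the repetition over time.
    CheckItem("Create an account with edge-case names",
              "names feed downstream systems",
              data=["", "O'Brien", "名前", "x" * 255]),
]

for item in account_checklist:
    hints = f" (try: {item.data})" if item.data else ""
    print(f"- {item.what} [why: {item.why}]{hints}")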

4 Adopting new agile testing practices

4.1 Don’t just chase the buzz-words

  • 4.1.1 You know what they are!

4.2 For every practice you introduce or change, ask yourself…

4.2.1 “Will this help me provide meaningful, quality related feedback, faster… through predominantly manual, human-driven, activities?”

  • 4.2.2 “Am I reducing the wasteful aspect of my testing?”
  • 4.2.2.1 “Or adding more waste!?”

4.3 Avoid vanity metrics

  • 4.3.1 Number of teams using practice X
  • 4.3.2 Number of people who have attended training course Y
  • 4.3.3 Number of team members who hold certification Z
  • 4.3.4 Hours spent between team members and agile coach

5 When you document, do so succinctly and with pace

5.1 Don’t repeat yourself (DRY)

  • 5.1.1 Do large pieces of one test look like large pieces of another?

5.2 Do you really need it? (DYRNI)

5.2.1 Do you have too much detail in your tests?

5.2.2 Do you really need it?

5.2.3 Who is it for?

  • 5.2.3.1 You?
  • 5.2.3.2 Another team member?
  • 5.2.3.3 Just in case!?
  • 5.2.3.3.1 Beware (!)

5.3 Don’t get blocked (DGB)

5.3.1 Agile “requirements” are rarely intended to be analysed alone

  • 5.3.1.1 Remember the 3 Cs
  • 5.3.1.1.1 Card
  • 5.3.1.1.2 Conversation (!)
  • 5.3.1.1.3 Confirmation

5.3.2 Don’t allow yourself to get blocked or wonder “what if” for too long

  • 5.3.2.1 Find the correct person to have a conversation with
  • 5.3.2.1.1 Fill the knowledge gap

5.3.3 Create a “need-more-info” annotation that you can use when nobody is around

  • 5.3.3.1 “As a user I would like to be able to import personnel records from other systems”
  • 5.3.3.1.1 “What other system?”
  • 5.3.3.1.2 “I’ll find out later”
  • 5.3.3.2 “Test that the data integrity of a personnel record is maintained when it is imported from [need-more-info]”
  • 5.3.3.2.1 “I’ll replace the text between my ‘need-more-info’ annotations later”
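One lightweight way to make such an annotation workable is a plain-text marker plus a small script that lists every open question before test execution begins. The {{need-more-info: …}} syntax below is a hypothetical convention for illustration, not something prescribed by the mind map.

import re

# Hypothetical marker convention: {{need-more-info: <open question>}}
MARKER = re.compile(r"\{\{need-more-info:\s*(.*?)\}\}")

test_text = ("Test that the data integrity of a personnel record is "
             "maintained when it is imported from "
             "{{need-more-info: what other system?}}")

def open_questions(text):
    """Return every unanswered question embedded in a test description."""
    return MARKER.findall(text)

for question in open_questions(test_text):
    print("Still to find out:", question)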

5.4.1 Optimise by relying on common information in other locations

  • 5.4.1.1 Wikis / whiteboards / tangible locations
  • 5.4.1.1.1 Test environment settings
  • 5.4.1.1.2 Test tool guides
  • 5.4.1.1.3 Training material
  • 5.4.1.1.4 Username / password vaults
  • 5.4.1.2 Your mind / experience!

6 Why manual testing? Don’t agile projects automate?

6.1 Automated tests can’t cover everything

6.1.1 Usability
6.1.2 Cross browser

  • 6.1.2.1 UI glitches
  • 6.1.2.2 Rendering problems
  • 6.1.2.3 Browsers / operating systems unsupported by automation tool

6.1.3 Style / branding

6.1.4 Mobile technologies

  • 6.1.4.1 Unsupported devices
  • 6.1.4.2 Unsupported interactions

6.2 Great test ideas come from manual interaction, exploration and investigation (i.e. manual testing!)

  • 6.2.1 Maybe these are ultimately automated
  • 6.2.2 Maybe not

6.3 Sometimes no automated tests exist

  • 6.3.1 Lack of desire
  • 6.3.2 Lack of skill
  • 6.3.3 “Lack of time”

6.4 Sometimes test automation coverage is low

  • 6.4.1 Moving from waterfall to agile
  • 6.4.2 The result of one or two hectic sprints
  • 6.4.2.1 Team has been focusing on “the essentials” (!?)

6.5 Sometimes automated tests fail / are unavailable

6.5.1 Too costly to update

6.5.2 Too unreliable to trust

6.5.3 Dependent on an ex-employee

6.5.4 But we still have to go ahead with a…

  • 6.5.4.1 demo
  • 6.5.4.2 release
  • 6.5.4.3 patch / upgrade

7 Things other team members can do to help

7.1 Being an Agile Manual Tester is difficult to do in isolation

  • 7.1.1 Many aspects of agile software development are geared towards collaboration
  • 7.1.2 Without collaboration, every team member feels the pain

7.2 Share everything

7.2.1 Examples
  • 7.2.1.1 Tangible
  • 7.2.1.1.1 Documents
  • 7.2.1.1.2 Repositories
  • 7.2.1.1.3 Notes from meetings

7.2.1.2 Intangible

  • 7.2.1.2.1 Time with the customer
  • 7.2.1.2.2 Solutions to common problems
  • 7.2.1.2.2.1 Setting up a local environment
  • 7.2.1.2.2.2 Educating new team members

7.2.1.2.3 Knowledge, experience and pain-points

7.2.2 An agile acid test

7.2.2.1 “I see we’re working on the same story, can I combine my info with yours?”

  • 7.2.2.1.1 Applies to every role combination
  • 7.2.2.1.1.1 Tester to tester
  • 7.2.2.1.1.2 Tester to developer
  • 7.2.2.1.1.3 Developer to tester
  • 7.2.2.1.1.4 (etc, etc)

7.2.2.2 “No, put it in your own document / repository”

  • 7.2.2.2.1 “Bad” agile

7.2.2.3 “Yes, feel free, would you mind reviewing my info whilst you’re adding yours?”

  • 7.2.2.3.1 “Good” agile

7.3 Improve manual testability

7.3.1 Log / report information that is useful for testers

7.3.2 Hidden “setup” screens

  • 7.3.2.1 For testing purposes only
  • 7.3.2.2 Enter / manipulate system data and state
  • 7.3.2.3 Simulate edge / error cases

7.4 Challenge manual testers

7.4.1 Provide a test build that’s already of high quality

7.4.1.1 Take unit testing seriously

  • 7.4.1.1.1 Make them part of “done”
  • 7.4.1.1.2 Run them regularly
  • 7.4.1.1.2.1 On every check-in

7.4.1.2 Fix bugs as soon as anyone finds them

  • 7.4.1.2.1 See The Testing Planet article, “10 reasons why you fix bugs as soon as you find them” (Matt Archer / Andy Glover)

7.5 Be able to deploy a test build on demand

7.5.1 “I’m ready to test the story you finished this morning that I see you’ve moved to ‘ready-for-testing’”

  • 7.5.1.1 “I’ll get it on the test environment for you by tomorrow”
  • 7.5.1.1.1 “Bad” agile

7.5.1.2 “OK, it’s already checked-in, run the automated build process and you’ll have it on the test environment in 15 minutes”

  • 7.5.1.2.1 “Good” agile

A bit late, I know, but this webcast will give a background and description of the IBM Rational Connector for SAP Solution Manager, covering its inception as a custom application for the Blue Harmony project (IBM’s deployment of SAP packaged applications for the IBM back office, and the largest single deployment of SAP packaged applications) and its evolution into a commercial product.

Date & Time

Tuesday, 24 July 2012
12:00pm Eastern Time (9:00am Pacific / 5:00pm GMT / 6:00pm CET)

The webcast will be one hour, including the presentation and time for questions.

Registration details here.

