<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/">
  <channel>
    <language>en</language>
    <title>Technology + Creativity at the BBC Feed</title>
    <description>Technology, innovation, engineering, design, development.
The home of the BBC's digital services.</description>
    <pubDate>Mon, 22 Mar 2021 13:39:37 +0000</pubDate>
    <generator>Zend_Feed_Writer 2 (http://framework.zend.com)</generator>
    <link>https://www.bbc.co.uk/blogs/internet</link>
    <atom:link rel="self" type="application/rss+xml" href="https://www.bbc.co.uk/blogs/internet/rss"/>
    <item>
      <title>Quality engineering for a shared codebase</title>
      <description><![CDATA[Some of the approaches required in building a shared codebase for BBC Online.]]></description>
      <pubDate>Mon, 22 Mar 2021 13:39:37 +0000</pubDate>
      <link>https://www.bbc.co.uk/blogs/internet/entries/c3bfeca9-88b5-4930-8be0-d7ca77ac6ea6</link>
      <guid>https://www.bbc.co.uk/blogs/internet/entries/c3bfeca9-88b5-4930-8be0-d7ca77ac6ea6</guid>
      <author>Abigael Ombaso</author>
      <dc:creator>Abigael Ombaso</dc:creator>
      <content:encoded><![CDATA[<div class="component">
    <img class="image" src="https://ichef.bbci.co.uk/images/ic/320xn/p09t2l37.jpg" srcset="https://ichef.bbci.co.uk/images/ic/80xn/p09t2l37.jpg 80w, https://ichef.bbci.co.uk/images/ic/160xn/p09t2l37.jpg 160w, https://ichef.bbci.co.uk/images/ic/320xn/p09t2l37.jpg 320w, https://ichef.bbci.co.uk/images/ic/480xn/p09t2l37.jpg 480w, https://ichef.bbci.co.uk/images/ic/640xn/p09t2l37.jpg 640w, https://ichef.bbci.co.uk/images/ic/768xn/p09t2l37.jpg 768w, https://ichef.bbci.co.uk/images/ic/896xn/p09t2l37.jpg 896w, https://ichef.bbci.co.uk/images/ic/1008xn/p09t2l37.jpg 1008w" sizes="(min-width: 63em) 613px, (min-width: 48.125em) 66.666666666667vw, 100vw" alt=""><p><em>The ‘You might have missed’ section showing featured content at the bottom of the BBC homepage</em></p></div>
<div class="component prose">
    <p>The BBC is developing a shared platform for building its digital products, to reduce development complexity and duplication as much as possible. The aim is to enable quicker, more efficient software development and, in turn, faster delivery of digital content to our audiences. <a href="https://www.bbc.co.uk/blogs/internet/entries/8673fe2a-e876-45fc-9a5f-203c049c9f9c">Read more about the technology changes</a>.</p>
<p>A key aspect of this project has been a shared repository for the presentation-layer code, with different teams working on the platform. This post shares our experiences so far through the lens of quality engineering, by answering three questions that commonly pop up before, during, and after product development &mdash; who is going to use the product, how will we ensure quality, and what have we learned so far?</p>
</div>
<div class="component prose">
    <h4>Who will be using the product?</h4>
<p>The main users of the product are the engineers across different teams working directly on the platform, and the consumers of our digital products. We want to keep making great digital products (quality, usability and design) even as we change technology platforms, while minimising bugs in our software as much as possible.</p>
<p>To reduce bugs and raised issues, testing is integrated into the development workflow, and team members across disciplines share ownership of product quality. A consistent approach to testing features in the platform, with a quick feedback loop for spotting and fixing defects early, helps to minimise the risks and their impact across different teams. This is an ongoing process, fine-tuned based on feedback from the development teams.</p>
<p><strong>Solution</strong>: The users&rsquo; needs (in our case, those of digital product users and of the engineering teams building on the shared platform) help to define the product requirements that influence the test process.</p>
<h4>How do we do the testing?</h4>
<p>One of the key things Test engineers and other project stakeholders consider is risk. The impact of different code merges and changes cascading to different teams was one such risk in the shared repo. Having a shared platform meant sharing other infrastructure besides a GitHub repo, such as deployment pipelines, communication channels in Slack, and documentation.</p>
<p>A consequence of this is that deployments are now visible to multiple teams and stakeholders, with notifications in our Slack channels flagging failing builds. Bugs get flagged quickly and, when needed, members of different development teams are able to &lsquo;swarm&rsquo; (even while working remotely) to collaboratively debug and resolve them. This has led to more frequent and better communication across teams; for getting more people talking and working together more often alone, we think the project has been worthwhile.</p>
<p>Consistency was planned into the project as a whole from the start, for example with the <a href="https://medium.com/bbc-design-engineering/the-lessons-learnt-creating-a-design-system-for-bbc-online-38625885870e">Design system</a>. Similarly, as we worked simultaneously in this shared code space, it was important to have consistency across teams in how features developed on the platform were tested, in order to minimise bugs, regressions and other product risks. An overarching test strategy covering manual and automated testing (guided by the <a href="https://en.wikipedia.org/wiki/Test_automation">Test pyramid</a> and <a href="https://kentcdodds.com/blog/write-tests">Testing trophy principles</a>) has informed our testing.</p>
<p>Automated tests run as part of pull request checks and before deployment to Live. We use a consistent set of automated test tools, so engineers across teams know what the expectations for testing are. This is by no means a finished endeavour but a continuous work in progress, so forums like the Test Guild and team knowledge sharing help with communication, continuous learning, and further improvements.</p>
<p>Because of the scale of the project we rely on automated test tooling for regression testing. We also began to use fairly new tools for visual regression testing, such as Storybook and Chromatic; alternatives we considered were Percy, Nightwatch.js and BrowserStack. For other types of automated tests we use Puppeteer, and formerly Cypress. We had communication channels with the test tool makers to feed back issues we encountered and to request new features as we scaled and grappled with the different tools.</p>
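<p>The idea behind those visual regression tools can be sketched in a few lines (this is an illustration of the principle, not Chromatic&rsquo;s or Percy&rsquo;s actual API): a candidate screenshot is compared against an approved baseline, and the check fails when too many pixels differ.</p>

```javascript
// Sketch of the principle behind visual regression testing (not the API of
// Chromatic, Percy or any real tool). Screenshots are modelled here as flat
// arrays of pixel values.
function diffRatio(baseline, candidate) {
  if (baseline.length !== candidate.length) return 1; // dimensions changed
  let differing = 0;
  for (let i = 0; i < baseline.length; i += 1) {
    if (baseline[i] !== candidate[i]) differing += 1;
  }
  return differing / baseline.length;
}

// Fail the check when more than `threshold` of the pixels have changed.
function hasVisualRegression(baseline, candidate, threshold = 0.01) {
  return diffRatio(baseline, candidate) > threshold;
}
```

<p>Real tools layer rendering, anti-aliasing tolerance and a human review workflow on top of this comparison, but the pass/fail decision is essentially a diff against an approved baseline.</p>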
<p><strong>Solution</strong>: Have a test strategy and plan early, to mitigate identified project risks by building quick and early feedback into the development process.</p>
</div>
<div class="component">
    <img class="image" src="https://ichef.bbci.co.uk/images/ic/320xn/p09bgq0n.jpg" srcset="https://ichef.bbci.co.uk/images/ic/80xn/p09bgq0n.jpg 80w, https://ichef.bbci.co.uk/images/ic/160xn/p09bgq0n.jpg 160w, https://ichef.bbci.co.uk/images/ic/320xn/p09bgq0n.jpg 320w, https://ichef.bbci.co.uk/images/ic/480xn/p09bgq0n.jpg 480w, https://ichef.bbci.co.uk/images/ic/640xn/p09bgq0n.jpg 640w, https://ichef.bbci.co.uk/images/ic/768xn/p09bgq0n.jpg 768w, https://ichef.bbci.co.uk/images/ic/896xn/p09bgq0n.jpg 896w, https://ichef.bbci.co.uk/images/ic/1008xn/p09bgq0n.jpg 1008w" sizes="(min-width: 63em) 613px, (min-width: 48.125em) 66.666666666667vw, 100vw" alt=""><p><em>A diagram showing factors influencing quality engineering cycle in the project - strategy and planning, product users, communication, technology and continuous learning.</em></p></div>
<div class="component prose">
    <h4>What have we learned?</h4>
<p>One of the benefits of a brand-new project is that there is no legacy code or technical debt at the start (this changes pretty quickly though!). Mature products have already gone through the growing pains: there are known unknowns, and workarounds for known problems or pain points, which the development teams (and Test engineers in particular) come to know and understand fairly well.</p>
<p>The challenge with new projects, however, is that there are lots of unknowns in the new technology stack. As the project has grown we have been dealing with, and learning from, scaling challenges: pipeline issues from multiple deployments taking place at the same time, improving monitoring of traffic and website status errors, and optimising our stack&rsquo;s performance as more product features have been built. Being able to identify such issues early on has been important, and manual testing by different team members helps to catch issues that the automated processes may not cover initially.</p>
<p><strong>Solution</strong>: Continuously learn and iterate as issues are identified and fixed.</p>
<h4>Conclusion</h4>
<p>Building quality engineering into a shared repository requires similar considerations to single-team projects, but on a bigger scale and with a wider focus: who is the product being made for, and by whom; what are the product risks; and what is the test approach or plan to reduce their impact. The aim is to provide quick feedback and monitoring for regressions during the software development process, and test automation and tooling are important for facilitating this. Continuously learning about our product (including from our product users) through regular communication, exploration and collaboration has helped us iterate on our quality processes, and has been important for our quality engineering.</p>
</div>
]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>10x your collaboration on writing tests</title>
      <description><![CDATA[Qambar Raza and Sheila Bhati explain how certain collaborative practices can help improve software testing.]]></description>
      <pubDate>Mon, 03 Feb 2020 11:21:10 +0000</pubDate>
      <link>https://www.bbc.co.uk/blogs/internet/entries/ecedc9af-4bc8-4fb3-9cd3-754a85a1ce19</link>
      <guid>https://www.bbc.co.uk/blogs/internet/entries/ecedc9af-4bc8-4fb3-9cd3-754a85a1ce19</guid>
      <author>Qambar Raza and Sheila Bhati</author>
      <dc:creator>Qambar Raza and Sheila Bhati</dc:creator>
      <content:encoded><![CDATA[<div class="component prose">
    <p>Collaboration is the process of two or more people or organisations working together to complete a task or achieve a goal.</p>
<p>There is an African proverb:</p>
<h4>&ldquo;If you want to go fast, go alone. If you want to go far, go together.&rdquo;</h4>
<p>Have you ever noticed any silos in your teams? Have you ever come across a &ldquo;them vs us&rdquo; situation when it comes to your tests? Do you find automation is sidelined from the software development process and treated as an afterthought?</p>
<p>What you need is a little collaboration!</p>
<p>I am Qambar Raza, the Senior Software Engineer in Test for iPlayer and Sounds.<br />And<br />I am Sheila Bhati, a Senior Tester in iPlayer mobile.</p>
<p>We decided to collaborate on this blog post about our experiences of helping to improve collaboration within our teams.</p>
<h4>Change understood by everyone</h4>
<p><strong>Scenario 1: Change understood by only one person</strong></p>
</div>
<div class="component">
    <img class="image" src="https://ichef.bbci.co.uk/images/ic/320xn/p0824yfc.jpg" srcset="https://ichef.bbci.co.uk/images/ic/80xn/p0824yfc.jpg 80w, https://ichef.bbci.co.uk/images/ic/160xn/p0824yfc.jpg 160w, https://ichef.bbci.co.uk/images/ic/320xn/p0824yfc.jpg 320w, https://ichef.bbci.co.uk/images/ic/480xn/p0824yfc.jpg 480w, https://ichef.bbci.co.uk/images/ic/640xn/p0824yfc.jpg 640w, https://ichef.bbci.co.uk/images/ic/768xn/p0824yfc.jpg 768w, https://ichef.bbci.co.uk/images/ic/896xn/p0824yfc.jpg 896w, https://ichef.bbci.co.uk/images/ic/1008xn/p0824yfc.jpg 1008w" sizes="(min-width: 63em) 613px, (min-width: 48.125em) 66.666666666667vw, 100vw" alt=""></div>
<div class="component prose">
    <p>Have you come across a project which relies heavily on one member of the team knowing everything about it? During stand-ups you see that person saying what&rsquo;s right and what isn&rsquo;t, and the whole team nodding in agreement.</p>
<p><strong>Scenario 2: Change is partially understood by everyone</strong></p>
<p>What about releases? Have you ever been scared to release something to production because you don&rsquo;t know what else is included in the release apart from your own change? This becomes more complicated over time, as changes bundle up to the point where you are releasing 50 commits in one release. How confident would you be with that?</p>
<p><strong>How did we solve it?</strong></p>
<p>For scenario 1, we propose <a href="https://en.wikipedia.org/wiki/Extreme_programming">XP practices</a>: constant pairing on writing code, and rotating the pairs across tickets to spread knowledge through the team.</p>
<p>For scenario 2, everyone in the team should be able to clearly understand the change being made to the system. This is only possible if the size of the change is kept small. In an ideal world that would be one atomic commit that gets released to production after it goes through all the phases of the software development lifecycle.</p>
<p>If a change is small, it is easy to understand and communicate, not only within the team but also to stakeholders outside it. Everyone in the software delivery chain is aware of what the change was and why it was made, and can support their customers better.</p>
<p>We understand that sometimes it is not possible to make atomic commits, and a change may include multiple commits. Although we highly discourage this practice, we have a temporary solution: within a department where multiple teams are involved in the change, you can hold a &ldquo;huddle&rdquo; when the change is being released to production. It helps teams share knowledge about the changes committed by members of each team, and ask questions about risks.</p>
<p>The problem with a bundled release is the rollback or revert process. For example, you release five changes together to live and one of the five features is completely broken. How would you deal with that? Would you do a full rollback? Or do you prefer a roll-forward or patch-based approach? Rolling forward takes longer, as you would be putting a &ldquo;proper&rdquo; fix in the release, and for that you need developer effort. The patch approach is a quick fix, so there is a chance of side effects. A rollback would mean that all the features you just demonstrated to your customers are taken away from them, which damages their trust and could have a big impact on your organisation&rsquo;s reputation.</p>
<p>A better approach is to keep things simple: releasing each change all the way from commit to production has several benefits:</p>
<ul>
<li>Team members are happy and on board, as they understand the change</li>
<li>Customers are happy, as there is less disruption and confusion in the product</li>
<li>Stakeholders are happy, because they can see what is live and what is not</li>
<li>The test team does not have to create complex test scenarios to approve the change, so testing is faster</li>
<li>Your overall pipeline process is smoother and faster</li>
</ul>
<p>In order to understand the impact of a change as a team, we need to create a shared understanding of the change. So how can teams go about building that shared understanding?</p>
<h4>Help create a shared understanding between developers and testers by encouraging developer and tester pairing</h4>
</div>
<div class="component">
    <img class="image" src="https://ichef.bbci.co.uk/images/ic/320xn/p0824yjm.jpg" srcset="https://ichef.bbci.co.uk/images/ic/80xn/p0824yjm.jpg 80w, https://ichef.bbci.co.uk/images/ic/160xn/p0824yjm.jpg 160w, https://ichef.bbci.co.uk/images/ic/320xn/p0824yjm.jpg 320w, https://ichef.bbci.co.uk/images/ic/480xn/p0824yjm.jpg 480w, https://ichef.bbci.co.uk/images/ic/640xn/p0824yjm.jpg 640w, https://ichef.bbci.co.uk/images/ic/768xn/p0824yjm.jpg 768w, https://ichef.bbci.co.uk/images/ic/896xn/p0824yjm.jpg 896w, https://ichef.bbci.co.uk/images/ic/1008xn/p0824yjm.jpg 1008w" sizes="(min-width: 63em) 613px, (min-width: 48.125em) 66.666666666667vw, 100vw" alt=""></div>
<div class="component prose">
    <p>When work in a team isn&rsquo;t aligned and automation is put in place retrospectively rather than at the time of development, it can increase the burden on manual exploratory testing and ultimately increase the risk of bug leakage, potentially requiring rework on issues found late in the development cycle. Collaboration between Dev and Test teams helps to ensure test automation is part of the definition of done. By not having to develop automation retrospectively you can maintain focus on the current task, smooth the process of delivery, and give the team more confidence to deliver features faster.</p>
<p>In our experience, by introducing pairing practices dev and test can agree which tests are required and whether to include them at the UI, integration or unit test level. This helps in a number of ways:</p>
<ul>
<li>Creating a shared understanding of test coverage</li>
<li>Reducing duplication of tests and the effort to maintain them</li>
<li>Focusing exploratory testing more on ways to improve the product, rather than using it as a bug safety net.</li>
</ul>
<p>Overall it contributes to a reduction of risk and can help teams go faster.</p>
<p>This is just one blog post out of the series that we are planning to write to share our experience with you. Please feel free to comment and share how you collaborate within your teams.</p>
</div>
]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>10 guidelines on readability and consistency when writing integration tests</title>
      <description><![CDATA[Some guidelines on integration testing.]]></description>
      <pubDate>Wed, 24 Oct 2018 11:57:59 +0000</pubDate>
      <link>https://www.bbc.co.uk/blogs/internet/entries/e9356c0f-cf17-4ccc-bb05-2fa5d13fa289</link>
      <guid>https://www.bbc.co.uk/blogs/internet/entries/e9356c0f-cf17-4ccc-bb05-2fa5d13fa289</guid>
      <author>Qambar Raza</author>
      <dc:creator>Qambar Raza</dc:creator>
      <content:encoded><![CDATA[<div class="component prose">
    <p>According to the Oxford English Dictionary, readability means: &ldquo;The quality of being easy or enjoyable to read.&rdquo; Is your Integration Testing Framework easy and enjoyable to read?</p>
<p>We identified two different levels of integration testing in TV Platform, System Integration and System Components Integration. This blog post focuses on the guidelines for &ldquo;System Components Integration&rdquo; Level.</p>
</div>
<div class="component">
    <img class="image" src="https://ichef.bbci.co.uk/images/ic/320xn/p06pqpms.jpg" srcset="https://ichef.bbci.co.uk/images/ic/80xn/p06pqpms.jpg 80w, https://ichef.bbci.co.uk/images/ic/160xn/p06pqpms.jpg 160w, https://ichef.bbci.co.uk/images/ic/320xn/p06pqpms.jpg 320w, https://ichef.bbci.co.uk/images/ic/480xn/p06pqpms.jpg 480w, https://ichef.bbci.co.uk/images/ic/640xn/p06pqpms.jpg 640w, https://ichef.bbci.co.uk/images/ic/768xn/p06pqpms.jpg 768w, https://ichef.bbci.co.uk/images/ic/896xn/p06pqpms.jpg 896w, https://ichef.bbci.co.uk/images/ic/1008xn/p06pqpms.jpg 1008w" sizes="(min-width: 63em) 613px, (min-width: 48.125em) 66.666666666667vw, 100vw" alt=""><p><em>TV Platform Test Pyramid</em></p></div>
<div class="component prose">
    <p>In the above example pyramid, TAP (Television Application Platform) is the primary launch mechanism for all our TV apps (iPlayer, News, Sport, RB+, Live Experience).</p>
<h4>So how did it begin?</h4>
    <p>In August 2017, <a href="https://github.com/ariya/phantomjs/issues/15105">the PhantomJS repository was officially declared abandoned</a>. This prompted us to investigate an alternative tool for running our integration tests. After several discussions and experiments we decided to build our new framework on Puppeteer and Jest, which meant migrating from our Casper tests (which relied heavily on PhantomJS) to Puppeteer.<br />It gave us an &ldquo;opportunity&rdquo; to think about our current tests. We started asking ourselves questions like &ldquo;Do we need to migrate this?&rdquo;, &ldquo;Is this still relevant?&rdquo; and &ldquo;What are we testing here?&rdquo;. So instead of a lift-and-shift approach we decided on a cleanup and rewrite.</p>
<p>The cleanup and rewrite wasn&rsquo;t as easy as we thought: some of the tests had been written by people who had already moved to different teams, some were testing multiple scenarios, and others were running unnecessary user journeys. Over time, the framework had become a hoarder&rsquo;s palace. It was time for a change!</p>
<p>We started asking ourselves how to create consistency in a single framework worked on by multiple teams. Every team has a different culture and method of implementation &mdash; how could we get everyone to agree to follow one approach? We called a meeting of all the teams and asked their test representatives for suggestions. The most common suggestion was: &ldquo;We need integration test guidelines&rdquo;.</p>
<p>This blog post covers the integration test guidelines we have written, based on the experience and lessons learned from the previous framework, to remind us not to fall into the same trap again.</p>
</div>
<div class="component">
    <img class="image" src="https://ichef.bbci.co.uk/images/ic/320xn/p06pqpy2.jpg" srcset="https://ichef.bbci.co.uk/images/ic/80xn/p06pqpy2.jpg 80w, https://ichef.bbci.co.uk/images/ic/160xn/p06pqpy2.jpg 160w, https://ichef.bbci.co.uk/images/ic/320xn/p06pqpy2.jpg 320w, https://ichef.bbci.co.uk/images/ic/480xn/p06pqpy2.jpg 480w, https://ichef.bbci.co.uk/images/ic/640xn/p06pqpy2.jpg 640w, https://ichef.bbci.co.uk/images/ic/768xn/p06pqpy2.jpg 768w, https://ichef.bbci.co.uk/images/ic/896xn/p06pqpy2.jpg 896w, https://ichef.bbci.co.uk/images/ic/1008xn/p06pqpy2.jpg 1008w" sizes="(min-width: 63em) 613px, (min-width: 48.125em) 66.666666666667vw, 100vw" alt=""><p><em>Guideline to Readable Tests</em></p></div>
<div class="component prose">
    <p>So here they are:</p>
<h4>Guidelines</h4>
<ul>
<li>Avoid unnecessary user journeys</li>
<li>Write atomic independent tests</li>
<li>Don't overdo the DRY principle</li>
<li>Abstract tests using Group by Intent not by page(s)</li>
<li>Don't couple domain knowledge with reusable core functionality</li>
<li>Don&rsquo;t use artificial delays</li>
<li>Don't use dependencies which are loaded externally</li>
<li>Use anatomically correct test language</li>
<li>Re-use rather than re-invent</li>
<li>Shared code should be reusable and adoptable.</li>
</ul>
<h4>What do they mean?</h4>
<p><strong>Avoid unnecessary user journeys</strong></p>
<p>If you are testing the video player, test the video player. Don&rsquo;t open the homepage, click on an icon to play the video, navigate to the video player and then test it. Everything from the homepage up to the video player is an unnecessary user journey that you can, and should, avoid.</p>
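<p>A minimal sketch of the difference, using an invented <code>page</code> stub (standing in for a real browser driver such as Puppeteer) and made-up URLs:</p>

```javascript
// Hypothetical stub that records navigations, in place of a real browser driver.
function makePage() {
  const visits = [];
  return { goto(url) { visits.push(url); }, visits };
}

// Unnecessary journey: three navigations before the video player test starts.
function openPlayerViaHomepage(page) {
  page.goto('/homepage');
  page.goto('/iplayer');
  page.goto('/iplayer/episode/abc123');
}

// Direct: one navigation, and the test begins where the feature lives.
function openPlayerDirectly(page) {
  page.goto('/iplayer/episode/abc123');
}
```

<p>Every extra step in the journey is another place for the test to fail for reasons unrelated to the video player.</p>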
<p><strong>Write atomic independent tests</strong></p>
<p>Tests should be simple and should test one aspect only, independently of other tests. Coupling to another test because it has already done half of your user journey is bad practice and should be avoided at all times. The consequence is that you won&rsquo;t be able to pinpoint the exact problem when the test fails, and you will spend longer debugging the issue than you saved by not writing another test. It is a <a href="https://en.wikipedia.org/wiki/False_economy">false economy</a>, and it makes it harder to understand what the test is actually trying to do, because it is so tightly coupled to the previous one.</p>
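<p>As a sketch, with a hypothetical player object rather than any real BBC code, each test builds its own state instead of leaning on a previous test:</p>

```javascript
// Hypothetical player object used only for illustration.
function createPlayer() {
  return {
    playing: false,
    play() { this.playing = true; },
    pause() { this.playing = false; },
  };
}

// Atomic: this test owns its entire setup...
function testPlayStartsPlayback() {
  const player = createPlayer(); // fresh state, no dependency on other tests
  player.play();
  return player.playing === true;
}

// ...and so does this one, so either can run (or fail) on its own.
function testPauseStopsPlayback() {
  const player = createPlayer();
  player.play();
  player.pause();
  return player.playing === false;
}
```

<p>Either test can now run, fail or be deleted on its own, and a failure points straight at the behaviour that broke.</p>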
<p><strong>Don&rsquo;t overdo the DRY principle</strong></p>
<p>DRY (Don&rsquo;t Repeat Yourself) is a very well-known principle in the industry, but it is good to understand when to avoid it. Duplication is an obvious maintenance problem, and there is a secondary meaning to the principle: when adding new features to the framework or tests, it should take the fewest steps possible, with a minimum of repetition. Sometimes, though, DRY is taken to an extreme that makes the code difficult to read; in that case it is preferable to write WET (&ldquo;write everything twice&rdquo;) code.</p>
<p>Sandi Metz illustrates this really well in her talk titled &lsquo;<a href="https://www.youtube.com/watch?v=8bZh5LMaSmE">All the Little Things</a>&rsquo;:</p>
<p>&nbsp;-&nbsp;<em>&ldquo;Duplication is far cheaper than the wrong abstraction&rdquo; - Sandi Metz, RailsConf 2014.</em></p>
<p>So, if we&rsquo;re willing to tolerate some duplication, how do we avoid bugs caused by it? One solution is to write a test that fails when one piece of the duplicated logic changes but the other does not.</p>
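<p>As a sketch (the duration formatters below are hypothetical names, not real framework code), such a test simply pins the two copies together:</p>

```javascript
// Two pieces of deliberately duplicated (WET) logic that must stay in step;
// hypothetical names for illustration.
function formatDurationForPlayer(seconds) {
  const m = Math.floor(seconds / 60);
  const s = seconds % 60;
  return `${m}:${String(s).padStart(2, '0')}`;
}

// The same logic duplicated elsewhere rather than forced into an abstraction.
function formatDurationForEpisodeList(seconds) {
  const m = Math.floor(seconds / 60);
  const s = seconds % 60;
  return `${m}:${String(s).padStart(2, '0')}`;
}

// The guard test: fails as soon as the two copies disagree on any sample.
function duplicatesAgree(samples) {
  return samples.every(
    (n) => formatDurationForPlayer(n) === formatDurationForEpisodeList(n)
  );
}
```

<p>If someone later fixes a bug in one copy but not the other, this test fails immediately, which is the cheap insurance that makes tolerating the duplication safe.</p>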
<p>Just remember, readability makes it easier to debug.</p>
<p><strong>Abstract tests using Group by Intent not by page(s).</strong></p>
<p>User flow is not the same as the order of pages, so grouping should be based on the responsibility of the code, not on the order of the pages.</p>
<p>As mentioned by Soumya Swaroop in her blog post <a href="https://blog.getgauge.io/are-page-objects-anti-pattern-21b6e337880f">The Page Objects anti pattern</a></p>
<p><em>- <strong>Use the right abstractions!</strong> Group by intent <strong>not</strong> page(s).</em></p>
<p>and Robert C. Martin said in his book:</p>
<p>- "<em>A class should have only one reason to change&rdquo;</em> - <em>Agile Software Development, Principles, Patterns, and Practices. Prentice Hall. p. 95. ISBN 978-0135974445</em></p>
<p>When you don&rsquo;t use the right kind of abstraction you increase the complexity of the code, making your tests difficult to read and understand.</p>
<p><strong>Don't couple domain knowledge with reusable core functionality</strong></p>
<p>If your test verifies whether a statistics event has been fired, and also compares the objects in the fetched result, don&rsquo;t couple the two together. Keep the reusable &ldquo;comparison&rdquo; part in your core, and keep the recording of fired stats separate: that is domain knowledge about which URLs carry which parameters, and in what pattern.</p>
<p><strong>Don&rsquo;t use artificial delays</strong></p>
<p>Write tests which are event-driven rather than time-based. Don&rsquo;t rely on fixed waits or on things happening within X seconds; otherwise you will keep increasing the time until the test passes, and the test will start failing in future when the &ldquo;ideal&rdquo; conditions change. This is not a healthy practice; it is actually an insult to your super-fast resources, which will only get better with time.</p>
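<p>What an eventful wait can look like, as a sketch with a hypothetical helper rather than a specific framework API (Puppeteer&rsquo;s own <code>waitForSelector</code> follows the same principle):</p>

```javascript
// Hypothetical helper: poll a condition and resolve as soon as it holds,
// instead of always sleeping a fixed number of milliseconds.
function waitFor(predicate, { timeout = 5000, interval = 50 } = {}) {
  const start = Date.now();
  return new Promise((resolve, reject) => {
    const check = () => {
      if (predicate()) return resolve();
      if (Date.now() - start > timeout) {
        return reject(new Error(`condition not met within ${timeout}ms`));
      }
      setTimeout(check, interval);
    };
    check();
  });
}

// Usage: resolves as soon as the player reports ready, however fast that is.
let playerReady = false;
setTimeout(() => { playerReady = true; }, 20);
waitFor(() => playerReady, { timeout: 1000 }).then(() => {
  console.log('player ready');
});
```

<p>The test passes as soon as the condition holds on fast machines, and fails with a clear timeout error rather than silently depending on &ldquo;ideal&rdquo; timing.</p>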
<p><strong>Don't use dependencies which are loaded externally</strong></p>
    <p>Have you considered that the dependency whose function you are calling may or may not be loaded at the time you invoke it? This results in flaky behaviour in your tests: they may always pass locally, but not all environments are the same.</p>
<p>However, you can reference the external dependency, load it internally, and manage it as part of your framework. It should not be loaded as part of the application that you are testing.</p>
<p><strong>Use anatomically correct test language</strong></p>
<p>As an example, instead of coming up with an entirely new way of doing a NOT check:</p>
</div>
<div class="component">
    <img class="image" src="https://ichef.bbci.co.uk/images/ic/320xn/p06pqq5c.jpg" srcset="https://ichef.bbci.co.uk/images/ic/80xn/p06pqq5c.jpg 80w, https://ichef.bbci.co.uk/images/ic/160xn/p06pqq5c.jpg 160w, https://ichef.bbci.co.uk/images/ic/320xn/p06pqq5c.jpg 320w, https://ichef.bbci.co.uk/images/ic/480xn/p06pqq5c.jpg 480w, https://ichef.bbci.co.uk/images/ic/640xn/p06pqq5c.jpg 640w, https://ichef.bbci.co.uk/images/ic/768xn/p06pqq5c.jpg 768w, https://ichef.bbci.co.uk/images/ic/896xn/p06pqq5c.jpg 896w, https://ichef.bbci.co.uk/images/ic/1008xn/p06pqq5c.jpg 1008w" sizes="(min-width: 63em) 613px, (min-width: 48.125em) 66.666666666667vw, 100vw" alt=""></div>
<div class="component prose">
    <p><strong>Re-use rather than re-invent</strong></p>
<p>If the code is already present, don&rsquo;t rewrite it to be slightly better. Re-use the existing functionality: more code means more maintenance. Make the core better so that everybody can benefit from it.</p>
<p>If you see lots of repeated code in your framework, it is a red flag for a lack of communication between the teams, and should be identified and addressed as soon as possible.</p>
<p>If you are unsure whether the code already exists, and you think it should, ask!</p>
<p><strong>Shared code should be reusable and adoptable</strong></p>
<p>With the right documentation around its functions, it is easy to understand the intent of a library function and adopt it. Ensure that your shareable code is wrapped with information that makes it easy to re-use and adopt.</p>
<p>That&rsquo;s all, folks!</p>
<p>It would be quixotic (extremely idealistic) to say that 100% of the tests will follow these guidelines. Our strategy is to influence the teams by sharing them as widely as possible, and to recruit like-minded representatives from each crew to help implement them in their areas of ownership.</p>
<p>Share your opinion in the comments below to help us improve them further.</p>
<p>I am keen to write a follow-up post depending on the interest, so if you have any questions feel free to comment and I&rsquo;ll try to answer them in the next one.</p>
<p>Cheers!</p>
</div>
]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Testing BBC Connected Red Button</title>
      <description><![CDATA[How we test the BBC Connected Red Button service on smart TV devices.]]></description>
      <pubDate>Fri, 08 Aug 2014 04:39:29 +0000</pubDate>
      <link>https://www.bbc.co.uk/blogs/internet/entries/41a89085-ab51-3454-a3a2-22799b0bfd50</link>
      <guid>https://www.bbc.co.uk/blogs/internet/entries/41a89085-ab51-3454-a3a2-22799b0bfd50</guid>
      <author>Krishnan Sambasivan</author>
      <dc:creator>Krishnan Sambasivan</dc:creator>
      <content:encoded><![CDATA[<div class="component prose">
    <p>Hello,</p><p>I am Krishnan, part of the team that brought the <a href="http://www.bbc.co.uk/faqs/online/connected_red_button">Connected Red Button</a> service to TVs. I'm a Junior Developer-in-Test working in the Platform Test team at MediaCityUK in Salford. This post is about how we test the BBC Connected Red Button service on Smart TV devices.</p>
</div>
<div class="component">
    <img class="image" src="https://ichef.bbci.co.uk/images/ic/320xn/p024c22b.jpg" srcset="https://ichef.bbci.co.uk/images/ic/80xn/p024c22b.jpg 80w, https://ichef.bbci.co.uk/images/ic/160xn/p024c22b.jpg 160w, https://ichef.bbci.co.uk/images/ic/320xn/p024c22b.jpg 320w, https://ichef.bbci.co.uk/images/ic/480xn/p024c22b.jpg 480w, https://ichef.bbci.co.uk/images/ic/640xn/p024c22b.jpg 640w, https://ichef.bbci.co.uk/images/ic/768xn/p024c22b.jpg 768w, https://ichef.bbci.co.uk/images/ic/896xn/p024c22b.jpg 896w, https://ichef.bbci.co.uk/images/ic/1008xn/p024c22b.jpg 1008w" sizes="(min-width: 63em) 613px, (min-width: 48.125em) 66.666666666667vw, 100vw" alt=""><p><em>The Connected Reb Button service available on connected TV devices</em></p></div>
<div class="component prose">
    <p><strong>Team Structure</strong></p><p>Testers are embedded within our agile development teams as Test Engineers or Developers-in-Test. We work very closely with Software Developers, Product Owners, Project Managers and Business Analysts to develop and test software, ensuring we make a great end product.</p><p><strong>Process</strong></p><p>The team operates a pull-based workflow where we set a limit on our work in progress (WIP). This is set at four tasks, with each tester supporting two.</p><p></p>
</div>
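<div class="component prose">
    <p>The pull-based WIP limit described above can be sketched in a few lines of Ruby (a hypothetical illustration; the class and method names are not from the CRB codebase):</p>

```ruby
# Minimal sketch of a pull-based WIP limit; hypothetical names, not CRB code.
class KanbanColumn
  def initialize(wip_limit)
    @wip_limit = wip_limit
    @tasks = []
  end

  # A new task can only be pulled in while the column is under its WIP limit.
  def pull(task)
    raise "WIP limit of #{@wip_limit} reached" if @tasks.size >= @wip_limit
    @tasks << task
  end

  def complete(task)
    @tasks.delete(task)
  end

  def in_progress
    @tasks.size
  end
end

in_progress = KanbanColumn.new(4)
%w[task1 task2 task3 task4].each { |t| in_progress.pull(t) }
# A fifth pull would raise until one of the four tasks completes.
```
</div>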
<div class="component">
    <img class="image" src="https://ichef.bbci.co.uk/images/ic/320xn/p024c25h.jpg" srcset="https://ichef.bbci.co.uk/images/ic/80xn/p024c25h.jpg 80w, https://ichef.bbci.co.uk/images/ic/160xn/p024c25h.jpg 160w, https://ichef.bbci.co.uk/images/ic/320xn/p024c25h.jpg 320w, https://ichef.bbci.co.uk/images/ic/480xn/p024c25h.jpg 480w, https://ichef.bbci.co.uk/images/ic/640xn/p024c25h.jpg 640w, https://ichef.bbci.co.uk/images/ic/768xn/p024c25h.jpg 768w, https://ichef.bbci.co.uk/images/ic/896xn/p024c25h.jpg 896w, https://ichef.bbci.co.uk/images/ic/1008xn/p024c25h.jpg 1008w" sizes="(min-width: 63em) 613px, (min-width: 48.125em) 66.666666666667vw, 100vw" alt=""><p><em>CRB Kanban board</em></p></div>
<div class="component prose">
    <p>Developers proactively identify a tester to pair with who has the capacity to support the work moved to in-progress on the <a href="http://en.wikipedia.org/wiki/Kanban_(development)">Kanban</a> board. This has helped us regulate work better, avoiding bottlenecks and <a href="http://dictionary.reference.com/browse/overburdening">overburdening</a>. It has improved the engagement of testers and has resulted in more testing happening earlier in the workflow. It has also helped to break down the old divide between developer and tester, ensuring we work collaboratively by developing and testing in parallel. Overall, the quality and the flow of work to "ready to deploy" has improved.</p><p>We use <a href="http://en.wikipedia.org/wiki/Behavior-driven_development">Behavior Driven Development</a> (BDD) tools like Cucumber to help us organise and automate our acceptance criteria. <a href="http://agile-wiki.wikispaces.com/Tips+for+Writing+Testable+Acceptance+Criteria">Testable acceptance criteria</a> are created collaboratively for the feature to be developed; issues can be detected, and even dealt with, at this early stage. It also helps ensure that everyone has a clear view of what is required and, importantly, that the product developed and tested relates directly to what the product owner wanted in the first place!</p><p>We use <a href="http://en.wikipedia.org/wiki/Feature_toggle">feature toggling</a> as a way of letting us ship releasable software any time we want, even if we are in the middle of building a new feature that isn't ready for users to see. Whenever we develop a new feature it will be ‘feature toggled’ on or off according to its readiness, and then a build is created for testing.</p><p>We initially test the feature running on a VM (virtual machine), and if we find any easily fixable bugs, they are fixed immediately and released in a new build on the VM. We monitor the health of builds using build verification tests running in a browser, as code is continually integrated into trunk and builds are created periodically.</p><p></p>
</div>
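<div class="component prose">
    <p>Feature toggling of the kind described above can be illustrated with a minimal Ruby sketch (hypothetical names and flags; this is not the CRB implementation):</p>

```ruby
# Minimal feature-toggle sketch; hypothetical names, not the CRB implementation.
# Toggles let a half-finished feature ship in a build with its code path off.
class FeatureToggles
  def initialize(flags = {})
    @flags = flags
  end

  # Unknown flags default to off, so unfinished features stay hidden.
  def enabled?(name)
    @flags.fetch(name, false)
  end
end

def render_homepage(toggles)
  sections = ["headlines"]
  sections << "video_wall" if toggles.enabled?("new_video_wall")
  sections << "live_restart" if toggles.enabled?("live_restart")
  sections
end

toggles = FeatureToggles.new("new_video_wall" => false, "live_restart" => true)
puts render_homepage(toggles).inspect # ["headlines", "live_restart"]
```

<p>Flipping <code>new_video_wall</code> to <code>true</code> in the flag hash is all it takes to expose the feature in a test build; no code change is needed.</p>
</div>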
<div class="component">
    <img class="image" src="https://ichef.bbci.co.uk/images/ic/320xn/p024c1ny.jpg" srcset="https://ichef.bbci.co.uk/images/ic/80xn/p024c1ny.jpg 80w, https://ichef.bbci.co.uk/images/ic/160xn/p024c1ny.jpg 160w, https://ichef.bbci.co.uk/images/ic/320xn/p024c1ny.jpg 320w, https://ichef.bbci.co.uk/images/ic/480xn/p024c1ny.jpg 480w, https://ichef.bbci.co.uk/images/ic/640xn/p024c1ny.jpg 640w, https://ichef.bbci.co.uk/images/ic/768xn/p024c1ny.jpg 768w, https://ichef.bbci.co.uk/images/ic/896xn/p024c1ny.jpg 896w, https://ichef.bbci.co.uk/images/ic/1008xn/p024c1ny.jpg 1008w" sizes="(min-width: 63em) 613px, (min-width: 48.125em) 66.666666666667vw, 100vw" alt=""><p><em>CRB build monitor</em></p></div>
<div class="component prose">
    <p>If a feature requires issues to be resolved, they will be triaged and added to the product backlog. When a viable VM-tested build is ready, it is released onto our TEST environment. Here we fully test the new feature on a TV device and carry out a regression test of the major existing features in the application, using a combination of automated and manual testing. This is how we work now, but it is constantly evolving as we try to continuously improve.</p><p><strong>Automated Testing</strong></p><p>Automated tests are created as features are developed, using a <a href="http://cukes.info/">Cucumber</a> and <a href="https://www.ruby-lang.org/en/">Ruby</a> framework which includes a message queue that sets up a communication channel between the test framework and a browser or device. These message queues are created on the fly whenever they are needed, and messages and responses are sent and received using simple HTTP requests. The test results from automated execution on devices are pushed automatically into our test case management tool, from which we can produce a combined and dynamic view of product coverage and automated device execution in our test dashboard.</p><p></p>
</div>
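<div class="component prose">
    <p>The on-the-fly message queues can be sketched as a small broker in Ruby. This is a hypothetical, in-process stand-in for the HTTP transport described above (class and channel names are illustrative, not from the real framework):</p>

```ruby
# Sketch of on-the-fly message queues bridging tests and a device.
# Hypothetical in-process stand-in; the real framework carries these
# messages as simple HTTP requests and responses.
class QueueBroker
  def initialize
    # Channels are created on the fly the first time they are used.
    @queues = Hash.new { |h, k| h[k] = Queue.new }
    @mutex = Mutex.new
  end

  # One end (a test step, say) posts a message to a named channel.
  def post(channel, message)
    @mutex.synchronize { @queues[channel] }.push(message)
  end

  # The other end (a device agent) polls the channel for the next message.
  def poll(channel)
    q = @mutex.synchronize { @queues[channel] }
    q.empty? ? nil : q.pop
  end
end

broker = QueueBroker.new
broker.post("device-42", { action: "press", key: "RED" })
p broker.poll("device-42") # the queued key-press message
```
</div>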
<div class="component">
    <img class="image" src="https://ichef.bbci.co.uk/images/ic/320xn/p024c23c.jpg" srcset="https://ichef.bbci.co.uk/images/ic/80xn/p024c23c.jpg 80w, https://ichef.bbci.co.uk/images/ic/160xn/p024c23c.jpg 160w, https://ichef.bbci.co.uk/images/ic/320xn/p024c23c.jpg 320w, https://ichef.bbci.co.uk/images/ic/480xn/p024c23c.jpg 480w, https://ichef.bbci.co.uk/images/ic/640xn/p024c23c.jpg 640w, https://ichef.bbci.co.uk/images/ic/768xn/p024c23c.jpg 768w, https://ichef.bbci.co.uk/images/ic/896xn/p024c23c.jpg 896w, https://ichef.bbci.co.uk/images/ic/1008xn/p024c23c.jpg 1008w" sizes="(min-width: 63em) 613px, (min-width: 48.125em) 66.666666666667vw, 100vw" alt=""><p><em>CRB Test dashboard</em></p></div>
<div class="component prose">
    <p>Our automation tests use a combination of live data and <a href="http://wiki.answers.com/Q/What_is_canned_data">canned data</a> from a static data provider. Canned data is very useful during development of new features, when the service layer code that supplies the data in normal operation is still under development.</p><p><strong>Regression and Risk-based Testing</strong></p><p>The regression testing of a product is a combination of automated and manual testing. Product coverage is set by prioritising areas of change in the code and key existing features that would cause the greatest impact to the product if broken.</p><p>Using a risk-based test scope, we can reduce our overall product regression effort whilst managing the risk that significant application issues could be introduced. The supported device list keeps increasing, and we also take a risk-based approach to our selection of devices, selecting some representative models from each manufacturer to test.</p><p><strong>Summary</strong></p><p>Previously, when we had only a small amount of automation, a lot of time was spent manually testing builds on a device, and sometimes we needed to repeat this testing once a build was found to have bugs. With more automation, both in validating builds and in testing on devices, we can find bugs faster and spend less time and effort on manual testing. Additionally, process experiments across the whole development team have found ways to improve the flow of work, making it possible to increase the number of supported devices and still introduce new features into CRB quickly.</p><p><em>Krishnan Sambasivan is a Junior Developer-in-Test in Platform Test, BBC Future Media</em></p><p> </p>
</div>
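<div class="component prose">
    <p>The canned-data approach can be sketched as a provider that serves static fixtures in place of the live service layer (hypothetical names and fixture data; this is not the actual BBC service code):</p>

```ruby
# Sketch of a data provider that can serve canned fixtures instead of
# live service data. Hypothetical; not the actual BBC service layer.
class DataProvider
  # Static fixtures standing in for responses from the service layer.
  CANNED = {
    "/channels" => ["BBC One", "BBC Two", "BBC News"],
  }.freeze

  def initialize(live_client: nil, use_canned: false)
    @live_client = live_client
    @use_canned = use_canned
  end

  # Return canned data when toggled on (e.g. while the service layer is
  # still under development); otherwise call through to the live client.
  def fetch(path)
    return CANNED.fetch(path) if @use_canned
    @live_client.fetch(path)
  end
end

provider = DataProvider.new(use_canned: true)
puts provider.fetch("/channels").inspect # ["BBC One", "BBC Two", "BBC News"]
```

<p>Because the canned path is deterministic, tests written against it keep passing even when the live service is unavailable or still being built.</p>
</div>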
]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
  </channel>
</rss>
