UI Smoke Tests With Playwright


Testing a single-page application (SPA) can be difficult, especially as the application grows more complex. While it's important to make sure components work as expected, many teams struggle with setting up data for automated component tests, mocking component data, or finding time to manually test meaningful user interactions. One of the tools we use to make SPA testing more manageable is Playwright.

Background

Playwright is a modern end-to-end testing framework that enables reliable testing of web applications across multiple browser engines (Chromium, Firefox, and WebKit, with branded channels such as Chrome and Edge). It allows developers to automate user interactions and test simple or complex web applications. For documentation, see https://playwright.dev/.

Why Smoke Tests?

While it can be useful to test every component and every interaction, we wanted to start small: a low-effort test suite that provides substantial benefit and leaves room to extend the tests later.

Approach 1: Playwright Against A Local Test Server

Our first approach involved setting up a test server against which Playwright could run, along with test files for key components of our single-page application. However, we quickly ran into the problem of mocking an ever-increasing amount of component data and API calls, which over time diverged from the real application.
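To give a sense of that mock burden, here is a hypothetical sketch (the route names and shapes are illustrative, not our actual code): every API route the SPA touches needs a canned response registered by hand (for example, wired up through Playwright's page.route), and each entry has to be kept in sync with the real backend as it evolves.

```typescript
// Hypothetical mock registry from Approach 1. Each endpoint the SPA
// calls needs an entry here, maintained by hand as the backend changes.
type MockResponse = { status: number; body: unknown };

const mocks: Record<string, MockResponse> = {
  "/api/products/": { status: 200, body: [{ id: 1, title: "example car 1" }] },
  "/api/products/1/colors/": { status: 200, body: ["Red", "Blue"] },
};

// Look up the canned response for a request path. Unregistered routes
// fall through to a 404, which is how mock drift typically shows up.
function resolveMock(path: string): MockResponse {
  return mocks[path] ?? { status: 404, body: { detail: "no mock registered" } };
}
```

Every backend change means another edit to a registry like this one, which is exactly the maintenance cost that pushed us toward a live server.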

Approach 2: Playwright Tests Against A Live Server

Our next approach involved setting up a user workflow that a user was expected to go through on our application, and automating this workflow using Playwright against a live server. Using a live server allowed us to use real API calls, removing the need to mock API calls, and it also gave us quick failures when an API endpoint changed in a way that the SPA did not expect. Note: “live server” here means a “staging” or “dev” environment, and not the production site.

First, we wrote out a handful of steps for the user to follow. These were considered a typical "happy path" workflow in our application:

a. Type in a product title and select a product.
b. Select a color ("Red").
c. Click the "Add Uploads" button, and make sure you're on the "Uploads" tab.
d. Type in a note.
e. Click on the "Add Attachments" button and upload an attachment.
f. Click "Summary".
g. Find the "Download Receipt" button, and click it.

Next, we turned these steps into a Playwright test. We created an e2e-tests directory and added a createReceiptPDFLifecycle.spec.ts file with the following code:

import { expect, test, type Page } from "@playwright/test";


test.describe("Create receipt PDF:", () => {
  test.beforeEach(async ({ page }) => {
    // Go to the homepage.
    await page.goto("/");
  });


  test("End-to-end workflow for creating a receipt PDF", async ({ page }) => {
   
    // a. Type in product title and select a product.
    await page.click("text=Search For");
    await page.locator("#title input").fill("example");
    await page.click("text=example car 1");
    await page.waitForLoadState("networkidle");
    console.log("✓ Select a product.");


    // b. Select a color ("Red").
    await expect(page.locator("#id_color input")).toBeEnabled();
    await page.locator("#id_color").click();
    await page.locator('ul li:has-text("Red")').click();
    console.log("✓ Select a color ('Red').");


    // c. Click the "Add Uploads" button, and make sure you're on the "Uploads" tab.
    const addUploadsLink = page.locator("a", { hasText: "Add Uploads" });
    await expect(addUploadsLink).toBeEnabled();
    await addUploadsLink.click();
    console.log("✓ Click the 'Add Uploads' button, and make sure you're on the 'Uploads' tab.");


    // d. Type in a note
    const detailsElement = page.getByLabel("Rich text editor");
    await detailsElement.click();
    await detailsElement.fill("Note 1");
    console.log("✓ Type in a note.");


    // e. Click on the "Add Attachments" button and upload an attachment.
    const inputElement = page.locator("#id_attachment").locator("input");
    const filePath = "./e2e-tests/cats.jpg";
    await inputElement.setInputFiles(filePath);
    await page.waitForLoadState("networkidle"); // Wait for upload to happen.
    console.log("✓ Click on the 'Add Attachments' button and upload an attachment.");


    // f. Click "Summary".
    const submitButtonLocator = page.locator("button", { hasText: "Summary" });
    await expect(submitButtonLocator).toBeEnabled();
    await submitButtonLocator.click();
    console.log("✓ Click 'Summary'.");


    // g. Find the "Download Receipt" button, and click it
    await expect(page.getByTestId("Download Receipt")).toBeVisible({ timeout: 60000 });
    await page.getByTestId("Download Receipt").click({ timeout: 15000 });
    console.log("✓ Download Receipt.");
  });
});

Next, we configured Playwright to run against a live server. Practically, this meant:

1. creating a separate Playwright configuration file for the user workflow tests

// playwright-e2e.config.ts
import { defineConfig, devices } from "@playwright/test";
export default defineConfig({
  testDir: "./e2e-tests",
  /* Run tests in files in parallel */
  fullyParallel: true,
  /* Shared settings for all the projects below. See https://playwright.dev/docs/api/class-testoptions. */
  /* Shared settings for all the projects below. See https://playwright.dev/docs/api/class-testoptions. */
  use: {
    /* Base URL to use in actions like `await page.goto('/')`. */
    baseURL: "https://testserver.our-application.com",
    /* Collect trace when retrying the failed test. See https://playwright.dev/docs/trace-viewer */
    trace: "on-first-retry",
  },
  timeout: 30000, // Timeout for all tests.
  /* Configure tests for just 1 major browser */
  projects: [
    {
      name: "firefox",
      use: { ...devices["Desktop Firefox"] },
    },
  ],
});

2. adding a new command to our package.json file

"test:e2e": "playwright test --config=playwright-e2e.config.ts",

3. making sure we clean up any data created on the live server during the test. To do this, we added a deletePDF() helper function to our Playwright test, which deletes the created object through the Django admin interface.

const deletePDF = async (page: Page) => {
  // Go to the Django admin and delete the object.
  const pdfId = page.url().split("/pdfs/")[1].split("/")[0];
  const urlToDelete = `/admin/pdfs/pdf/${pdfId}/delete/`;
  await page.goto(urlToDelete);
  await page.locator('input[value="Yes, I\'m sure"]').click();
  console.log("✓ Delete the PDF.");
};
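One caveat about the URL parsing in deletePDF: if the test ends on a page whose URL contains no /pdfs/ segment, the chained split() calls throw a TypeError with an unhelpful message. A small, hypothetical guard (not part of our original code) makes that failure mode explicit:

```typescript
// Hypothetical helper (not in the original test): pull the PDF id out
// of a URL like "https://testserver.our-application.com/pdfs/42/edit/".
// Returns null instead of throwing when the URL has no /pdfs/ segment.
function extractPdfId(url: string): string | null {
  const match = url.match(/\/pdfs\/([^/]+)/);
  return match ? match[1] : null;
}
```

With a helper like this, deletePDF could skip cleanup (or fail with a clearer message) when the test never actually reached a PDF page.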

And we added that as the last step in our test:

    // h. Delete the created PDF.
    await deletePDF(page);

Further Considerations

When implementing Playwright tests against a live server, there are several additional factors to consider:

  • Authentication: Testing against a live server typically requires handling user authentication. This can be managed through helper functions that programmatically log in users and store credentials securely, rather than hardcoding sensitive information in test files.
  • Test Execution Frequency: Teams need to decide when these tests should run: on every pull request, on releases, or on a schedule (for example, nightly). The decision often depends on test execution time and the criticality of the workflows being tested. The timing also helps isolate the source of a failure: a run right after a deploy ties a failure to that deploy's code changes, while a nightly run can surface problems caused by non-code changes (such as a change in an integration with another system).
  • Test Scope and Execution Time: End-to-end tests can be time-consuming, so it's important to focus on the most critical user workflows rather than attempting comprehensive coverage. Teams should establish reasonable time limits and prioritize tests that provide the highest value for catching regressions.
  • Server Data: Running the tests against a live server may create test data that should be removed. We implemented a deletePDF() function to delete the PDF that the test created, and other tests would need to have similar considerations. Moreover, testing against a production server can cause test data to mix with user data, so we made sure to only run the tests against a site meant for testing.
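On the authentication point, Playwright's documented storageState mechanism is one option: log in once in a setup step, save the resulting cookies and local storage to a file, and reuse that file for every test. A sketch of the configuration side (a config fragment, not our exact setup; the storage-state path is illustrative, and the file would be produced by a setup script that logs in with credentials from environment variables and calls page.context().storageState()):

```typescript
// playwright-e2e.config.ts (excerpt): reuse a saved login state so
// tests neither hardcode credentials nor log in on every run.
import { defineConfig } from "@playwright/test";

export default defineConfig({
  use: {
    baseURL: "https://testserver.our-application.com",
    // Saved authentication state; keep this file out of version control.
    storageState: "playwright/.auth/user.json",
  },
});
```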

Conclusion

We found that running these tests provided far more benefit than testing individual components, since they followed a workflow a real user might go through and showed us when key parts of that workflow stopped working. They also let us stop relying on (and maintaining) mock data and exercise real API endpoints, just as a user would.

While it's not feasible to test everything, we got a lot of benefit from running the Playwright tests for a typical user workflow against a live server.

Dmitriy Chukhin
