
By Arti Bandi

Conditional Routing in Broadcom DevTest

How to build conditional API routing in Broadcom DevTest VSEs and in Beeceptor. Using a rule-based mock + proxy model makes hybrid API virtualization much easier.

If you are using Broadcom DevTest, you will hit this requirement during API virtualization sooner or later.

If the request matches specific attributes, return a virtual response. Otherwise route the request to a live backend.

This is a very common testing scenario for integrations, hybrid environments, or failure scenarios without virtualizing the entire system.

There isn’t clean documentation around this, and a recent discussion in the DevTest community asked exactly the same question. It remained unanswered for a long time, which prompted me to write a detailed writeup here. The requirements are:

  • Match request attributes
  • Return different virtual responses based on payload values
  • Forward unmatched requests to a live endpoint

Here’s how to implement it in DevTest, and how the same workflow looks in Beeceptor.

Conditional Routing In DevTest

In Broadcom DevTest, this setup is typically implemented using:

  • VSE (Virtual Service Environment)
  • VSM (Virtual Service Model)
  • Request matchers
  • Live Invocation Steps

The main configuration happens inside the VSM editor.

Step 1: Match Request Attributes

Inside the VSM, you configure request matching conditions based on request content.

The matcher can inspect:

  • JSON fields
  • XML nodes
  • Query parameters
  • HTTP headers
  • Raw request body patterns

For example, you might inspect a JSON payload:

{
  "customerType": "premium"
}

and route that request to a premium mock response.

Another request containing:

{
  "customerType": "blocked"
}

could return a failure response instead.
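The matching logic above can be sketched as a small function. This is an illustrative sketch, not DevTest's matcher: the response contents and the `match_request` helper are hypothetical, assuming the same `customerType` payloads shown above.

```python
import json

# Canned virtual responses keyed by the customerType field (hypothetical values).
MOCK_RESPONSES = {
    "premium": (200, {"tier": "premium", "pricing": "discounted"}),
    "blocked": (403, {"error": "account blocked"}),
}

def match_request(raw_body: str):
    """Return (status, payload) for a matched request, or None if unmatched."""
    body = json.loads(raw_body)
    return MOCK_RESPONSES.get(body.get("customerType"))

# A premium payload matches a mock; an unknown value matches nothing,
# so the request would be routed to the live backend instead.
print(match_request('{"customerType": "premium"}'))
print(match_request('{"customerType": "gold"}'))
```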

Step 2: Create Multiple Virtual Responses

Inside the VSE, you can configure multiple request/response pairs.

Each pair can generate a different response:

  • success payloads
  • validation failures
  • authorization errors
  • delayed responses
  • fault injection

This allows DevTest to simulate many runtime scenarios within the same virtual service. For example:

  • premium users receive mocked discount pricing
  • blocked users receive HTTP 403
  • internal test accounts receive delayed responses
  • unmatched traffic continues to the live API

Step 3: Forward Unmatched Requests To Live API

This is handled using a Live Invocation Step inside the VSM.

Once configured, DevTest forwards unmatched requests to the real backend service and relays the live response back to the client.

This creates a partial virtualization model where:

  • some requests are mocked
  • some are transformed
  • some go directly to production-like environments
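The partial virtualization flow can be illustrated with a toy router. This is a sketch under stated assumptions: `live_backend` stands in for DevTest's Live Invocation Step, and the field names are hypothetical.

```python
def live_backend(request: dict) -> dict:
    # Placeholder for the real upstream call (in practice, an HTTP client call).
    return {"status": 200, "source": "live", "path": request["path"]}

def route(request: dict) -> dict:
    """Mock matched requests; forward everything else to the live backend."""
    body = request.get("body", {})
    if body.get("customerType") == "premium":
        return {"status": 200, "source": "mock", "pricing": "discounted"}
    if body.get("customerType") == "blocked":
        return {"status": 403, "source": "mock"}
    # Unmatched traffic continues to the live API (partial virtualization).
    return live_backend(request)
```

The same request pipeline serves both mocked and live responses, which is exactly the hybrid model described above.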

Where the Struggle Is

DevTest gives you flexibility, but you are also required to maintain complex routing logic inside VSMs, and this becomes difficult over time. As more request matchers and response pairs get added, teams often run into issues such as:

  • understanding rule execution flow
  • debugging which matcher actually triggered
  • managing large VSM trees

Doing The Same In Beeceptor

Beeceptor supports mocking rules and proxying together within the same endpoint. The request processing model is inherently simple: all incoming requests are first evaluated against a rules engine, and if no rule matches, the request can automatically fall back to an upstream target or live backend.

These mock rules can be created from the UI or via APIs, and you can version them in source control (e.g., GitHub). Rules are executed strictly from top to bottom, and the first matching rule wins. This makes the routing behavior predictable even when multiple rules overlap.

Each rule can contain multiple matching conditions using AND logic. A rule can match based on:

  • request path
  • JSON body parameters
  • headers
  • query parameters
  • regular expressions
  • stateful variables stored in a database

For example, a rule can match:

customer.type = premium
AND
region = us-east

and immediately return a mocked payload without forwarding the request anywhere.
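A minimal sketch of such a first-match-wins rules engine with AND-ed conditions might look like this. The rule contents and helper names are hypothetical; this is not Beeceptor's implementation, just the evaluation model it describes.

```python
# Each rule: a dict of AND-ed conditions plus a canned response (hypothetical data).
RULES = [
    {"when": {"customer.type": "premium", "region": "us-east"},
     "respond": {"status": 200, "body": "premium us-east mock"}},
    {"when": {"customer.type": "blocked"},
     "respond": {"status": 403, "body": "blocked"}},
]

def lookup(request: dict, path: str):
    """Resolve a dotted path like 'customer.type' inside a nested dict."""
    value = request
    for key in path.split("."):
        value = value.get(key, {}) if isinstance(value, dict) else {}
    return None if value == {} else value

def evaluate(request: dict):
    """Rules run top to bottom; the first rule whose conditions ALL match wins."""
    for rule in RULES:
        if all(lookup(request, k) == v for k, v in rule["when"].items()):
            return rule["respond"]
    return None  # no match -> fall back to the upstream target
```

Because every condition inside a rule must hold (AND logic) and evaluation stops at the first match, overlapping rules resolve deterministically by their order.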

Two Ways To Proxy Requests In Beeceptor

Beeceptor offers proxying in two different ways depending on how much control you want over request routing.

1. Global Fallback Proxy

The simplest setup is configuring a fallback upstream target. This works well when you want to selectively mock only a few scenarios while allowing all remaining traffic to continue normally to the live service.

In this mode:

  • mock rules are evaluated first
  • if no rule matches, the request is automatically forwarded to the real backend

Example:

/oauth/token → mocked
/payments/failure → mocked
everything else → real staging API

This creates a partial virtualization setup with very little configuration overhead.
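The path-based example above can be sketched as follows. The paths come from the example; the mock payloads and the `handle` helper are hypothetical stand-ins for the endpoint's behavior.

```python
# Paths with mock rules; everything else falls through to the upstream target.
MOCKED_PATHS = {
    "/oauth/token": {"access_token": "fake-token", "expires_in": 3600},
    "/payments/failure": {"error": "payment declined"},
}

def handle(path: str) -> dict:
    if path in MOCKED_PATHS:  # mock rules are evaluated first
        return {"source": "mock", "body": MOCKED_PATHS[path]}
    # Global fallback proxy: forward to the real staging API (simulated here).
    return {"source": "upstream", "body": f"proxied {path}"}
```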

2. Per-Request Proxy Using HTTP Callout Rules

Beeceptor also supports request-level proxying through HTTP Callout Rules. This is more dynamic than a fallback proxy because the forwarding itself becomes conditional. An HTTP Callout Rule allows you to:

  • filter requests based on request parameters
  • selectively forward only matching requests
  • transform payloads before forwarding
  • send requests to completely different upstream systems

The HTTP callout can work in two modes depending on the testing scenario.

  • In synchronous mode, Beeceptor forwards the request to the upstream API, waits for the response and then returns that response back to the client. This is used when testing live integrations or dynamic responses.
  • In asynchronous mode, the HTTP callout triggers the upstream request in the background without blocking the client response. You should use this for webhook simulations, event-driven architectures, audit logging, notification workflows, or side-effect testing. This mode isn’t the focus of this post.
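The difference between the two modes can be illustrated with a toy handler. This is a sketch, assuming a hypothetical `call_upstream` function that simulates the upstream API with an artificial delay.

```python
import threading
import time

def call_upstream(request: dict) -> dict:
    time.sleep(0.05)  # simulate network latency of the upstream API
    return {"status": 200, "echo": request}

def handle_sync(request: dict) -> dict:
    """Synchronous mode: block until the upstream responds, relay its answer."""
    return call_upstream(request)

def handle_async(request: dict) -> dict:
    """Asynchronous mode: fire the callout in the background, reply immediately."""
    threading.Thread(target=call_upstream, args=(request,), daemon=True).start()
    return {"status": 202, "note": "callout triggered in background"}
```

In synchronous mode the client sees the upstream's response and pays its latency; in asynchronous mode the client gets an immediate acknowledgement while the side effect happens in the background.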

Since routing can depend on headers, payload values, or authentication tokens, this approach works well for complex multi-tenant API integrations.
