If you want to learn about what Mastodon is and how it differs from Twitter you can read this excellent article.
Removes Ribbon, Zuul, Hystrix and Spring Cloud AWS support. Check this PR for more information.
Zipkin is no longer part of Sleuth’s core. You can find out more in this PR.
Up till now we’ve been supporting the ON_EACH and ON_LAST Reactor instrumentation modes. That means that we would wrap every single Reactor operator (ON_EACH) or only the last operator (ON_LAST). Those wrappings would do their best to put trace-related entries in such a way that thread-local based instrumentations would work out of the box (e.g. the MDC context, Tracer.currentSpan(), etc.). The problem was that wrapping each operator degraded performance drastically and worked most of the time, while wrapping only the last operator degraded performance a lot and worked only sometimes. Both had their issues when flatMap operators were called and thread switching took place.
With this commit we’ve introduced a manual way of instrumenting Reactor. We came to the conclusion that the thread-local based paradigm doesn’t work well with Reactor. We can’t guess for the users what they really want to achieve and which operators should be wrapped. That’s why with the MANUAL instrumentation mode you can use the WebFluxSleuthOperators or MessagingSleuthOperators to provide a lambda that should have the tracing context set in thread local.
With this issue we’re setting manual instrumentation as the default for Spring Cloud Gateway. The performance gets drastically improved and the tracing context still gets automatically propagated. If you need to do some customized logging etc., just use the WebFluxSleuthOperators.
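If you want to pick an instrumentation mode explicitly, Sleuth exposes it as a configuration property. A minimal application.yml sketch, assuming the Sleuth 3.x property name `spring.sleuth.reactor.instrumentation-type` and its value names (verify against your version’s docs):

```yaml
spring:
  sleuth:
    reactor:
      # "manual" is the new mode described above; the wrapping modes
      # correspond to decorate-on-each and decorate-on-last (assumed value names)
      instrumentation-type: manual
```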
This issue introduces a change in the MDC keys (no more X-B3-... entries in MDC).
Before
2019-06-27 19:36:11,774 INFO {X-B3-SpanId=e30b6a75bcff782b, X-B3-TraceId=e30b6a75bcff782b, X-Span-Export=false, spanExportable=false, spanId=e30b6a75bcff782b, traceId=e30b6a75bcff782b} some log!
After
2019-06-27 19:36:11,774 INFO {spanId=e30b6a75bcff782b, traceId=e30b6a75bcff782b} some log!
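If your logging pattern referenced the old keys, it now only needs the new traceId and spanId entries. A Logback pattern sketch (the surrounding layout is illustrative; only the %X{traceId} / %X{spanId} MDC lookups matter):

```xml
<!-- MDC lookups use the new keys; the X-B3-* entries no longer exist -->
<pattern>%d{yyyy-MM-dd HH:mm:ss,SSS} %-5level [%X{traceId}/%X{spanId}] %msg%n</pattern>
```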
The spring-cloud-starter-zipkin dependency is removed. You need to add the spring-cloud-starter-sleuth and spring-cloud-sleuth-zipkin dependencies.
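In Maven terms the replacement looks like this (versions omitted, assuming they are managed by the Spring Cloud BOM):

```xml
<!-- replaces the removed spring-cloud-starter-zipkin -->
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-sleuth-zipkin</artifactId>
</dependency>
```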
OpenZipkin Brave was there in Sleuth’s code as the main abstraction since Sleuth 2.0.0. We’ve decided that with Sleuth 3.0.0 we can create our own abstraction (as we do in each Spring Cloud project) so that OpenZipkin Brave becomes one of the supported tracer implementations.
With this PR we’ve introduced a new abstraction that wraps Brave. We also added support for another tracer - OpenTelemetry.
With this PR and that PR we’ve refactored Spring Cloud Sleuth to reflect Spring Boot’s module setup. We’ve split the project into API, instrumentations, auto-configurations etc. Also, the documentation layout was updated to look the same way Spring Boot’s does.
Initially, with this commit, we’ve added a spring-cloud-sleuth-otel module inside Spring Cloud Sleuth that introduced OpenTelemetry support.
With this PR we’ve decided to move Spring Cloud Sleuth and OpenTelemetry integration to an incubator project. Once OpenTelemetry & OpenTelemetry Instrumentation projects become stable we will consider next steps.
In case of any questions don’t hesitate to ping us.
With the Incremental Test Generation for Maven we’re generating tests, stubs and the stubs jar only if the contracts have changed. The feature is opt-out (enabled by default).
With the support for resolving credentials from settings.xml when using the Aether-based solution to fetch the contracts / stubs, we will reuse your settings.xml credentials for the given server id (via the stubrunner.server-id property).
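As a sketch, for a repository whose credentials live under a my-repo server entry in settings.xml (the id value here is a placeholder):

```xml
<!-- settings.xml: credentials reused when downloading contracts / stubs -->
<server>
    <id>my-repo</id>
    <username>deployer</username>
    <password>secret</password>
</server>
```

You would then point at that entry with `stubrunner.server-id=my-repo`.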
It was fantastic to see so many people take part in rewriting the Spring Cloud Contract’s codebase from Groovy to Java. You can check this issue for more information.
With this issue and this pull request we’ve added an option to provide metadata to your contracts. Since we didn’t want to map all WireMock properties to the core of our contract definition, we’ve allowed passing metadata under the wiremock key. The passed value can be an actual WireMock definition. We will map that part onto the generated stub.
Example of adding delays:
Contract.make {
    request {
        method GET()
        url '/drunks'
    }
    response {
        status OK()
        body([
            count: 100
        ])
        headers {
            contentType("application/json")
        }
    }
    metadata([wiremock: '''\
        {
            "response" : {
                "delayDistribution": {
                    "type": "lognormal",
                    "median": 80,
                    "sigma": 0.4
                }
            }
        }
    '''
    ])
}
That also means that you can provide your own metadata. You can read more about this in the documentation.
With this PR we’ve introduced a new custom mode of test generation. You’re able to pass your own implementation of an HTTP client (you can reuse our OkHttpHttpVerifier), thanks to which you can e.g. use HTTP/2. This was a prerequisite for the GRPC task. Thanks to the Spring Cloud Contract Workshops and the following refactoring of Spring Cloud Contract it was quite easy to add this feature, so thanks to everyone involved!
You can read more about this in the documentation.
With the custom mode in place we could add the experimental GRPC support. Why experimental? Due to GRPC’s tweaking of the HTTP/2 header frames, it’s impossible to assert the grpc-status header. You can read more about the feature, the issue and workarounds in the documentation.
Here you can find an example of a GRPC producer and of a GRPC consumer.
With this PR we’ve added GraphQL support. Since GraphQL is essentially a POST to an endpoint with a specific body, you can create such a contract and set the proper metadata. You can read more about this in the documentation.
Here you can find an example of a GraphQL producer and of a GraphQL consumer.
With this issue we’ve migrated the Stub Runner Boot application to be a thin-jar based application. Not only have we managed to lower the size of the produced artifact, but we’re also able to turn on profiles via properties (e.g. the kafka or rabbit profiles) that fetch additional dependencies at runtime.
With the thin jar rewrite and this PR and this issue we’re adding support for Kafka and AMQP based solutions with the Docker images.
You’ll have to have the following prerequisite met: a triggerMessage(...) method with a String parameter that is equal to the contract’s label. Your contract can leverage the kafka and amqp metadata sections like below:
description: 'Send a pong message in response to a ping message'
label: 'ping_pong'
input:
  # You have to provide the `triggerMessage` method with the `label`
  # as a String parameter of the method
  triggeredBy: 'triggerMessage("ping_pong")'
outputMessage:
  sentTo: 'output'
  body:
    message: 'pong'
metadata:
  amqp:
    outputMessage:
      connectToBroker:
        declareQueueWithName: "queue"
      messageProperties:
        receivedRoutingKey: '#'
There is a legitimate reason to run your contract tests against existing middleware. Some testing frameworks might give you false positives - the test within your build passes whereas in production the communication fails.
In the Spring Cloud Contract docker images we give you an option to connect to existing middleware. As presented in the previous subsections, we support Kafka and RabbitMQ out of the box. However, via Apache Camel Components we can support other middleware too. Let’s take a look at the following examples of usage.
Example of a contract connecting to a real RabbitMQ instance:
description: 'Send a pong message in response to a ping message'
label: 'standalone_ping_pong'
input:
  triggeredBy: 'triggerMessage("ping_pong")'
outputMessage:
  sentTo: 'rabbitmq:output'
  body:
    message: 'pong'
metadata:
  standalone:
    setup:
      options: rabbitmq:output?queue=output&routingKey=#
    outputMessage:
      additionalOptions: routingKey=#&queue=output
You can read more about setting this up in this PR under the Documentation of the feature with standalone mode (aka with running middleware) section.
Since it’s extremely easy to start a docker image with a broker via Testcontainers, we suggest slowly migrating your messaging tests to such an approach. From the perspective of Spring Cloud Contract it’s also better, since we won’t need to replicate in our code the special cases of how frameworks behave when calling a real broker. Here you can find an example of how you can connect to a JMS broker on the producer side and here is how you can consume it.
This one is fully done by the one and only shanman190. The whole work on the Gradle plugin was done by him so you should buy him a beer once you get to see him :) Anyways, there are various changes to the Gradle plugin that you can check out.
In case of any questions don’t hesitate to ping us.
After a very long pause, finally I’ve managed to write a new blog post. It’s an interview with Jakub Kubryński about Continuous Delivery of a Startup. It’s published as part of the Java Advent Calendar.
Check it out here!
Why a new project? Because we’ve all been doing repetitive work. Check out this post where I write about the creation of a deployment pipeline. Every company does it and wastes money and resources on it. At Pivotal our goal is to give developers the tools they need to deliver features as fast as possible.
Spring Cloud Pipelines gives you an opinionated deployment pipeline. You can use it straight away or you can modify it. Do whatever you please :)
The repo is set up with a demo for Concourse CI and Jenkins. Read the docs on how to set it up for each of those tools. The deployment is done via Cloud Foundry. For the sake of the demo we’re using PCF Dev.
I’m really happy that the project is GA. Even though as the Accurest project we had already done a GA release, it really feels that a lot of effort was put in to release the GA version under Pivotal’s Spring Cloud branding. Let’s look at some numbers:
That’s quite a lot of work! But there we are, with a library that has already been battle-proven in production by many companies, even before going GA as Spring Cloud Contract.
Like I mentioned, Accurest was already GA. So what are the main differences apart from rebranding and bug fixes?
These are the Spring Cloud Contract Verifier changes. Apart from that, Spring Cloud Contract consists of the Spring Cloud Contract WireMock support and Spring Cloud Contract RestDocs. Thanks to the former the integration with WireMock is much more efficient, and thanks to the latter you don’t have to use the Groovy DSL - you can define your stubs yourself by attaching them to an existing RestDocs test.
As far as Spring Cloud Contract Verifier is concerned, the two biggest changes are the Consumer Contract support and the fact that you can have more than one base class for your tests. Let’s take a closer look at what the docs say about them…
Another way of storing contracts, other than keeping them with the producer, is keeping them in a common place. This can be related to security issues where the consumers can’t clone the producer’s code. Also, if you keep contracts in a single place then you, as a producer, will know how many consumers you have and which consumers you will break with your local changes.
Let’s assume that we have a producer with coordinates com.example:server and 3 consumers: client1, client2, client3. Then in the repository with common contracts you would have the following setup (which you can check out here):
├── com
│ └── example
│ └── server
│ ├── client1
│ │ └── expectation.groovy
│ ├── client2
│ │ └── expectation.groovy
│ ├── client3
│ │ └── expectation.groovy
│ └── pom.xml
├── mvnw
├── mvnw.cmd
├── pom.xml
└── src
└── assembly
└── contracts.xml
As you can see, under the slash-delimited group id / artifact id folder (com/example/server) you have the expectations of the 3 consumers (client1, client2 and client3). Expectations are the standard Groovy DSL contract files as described throughout this documentation. This repository has to produce a JAR file that maps one to one to the contents of the repo.
Example of a pom.xml inside the server folder:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="https://maven.apache.org/POM/4.0.0" xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="https://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.example</groupId>
    <artifactId>server</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>Server Stubs</name>
    <description>POM used to install locally stubs for consumer side</description>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>1.4.0.BUILD-SNAPSHOT</version>
        <relativePath />
    </parent>
    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <java.version>1.8</java.version>
        <spring-cloud-contract.version>1.0.1.BUILD-SNAPSHOT</spring-cloud-contract.version>
        <spring-cloud-dependencies.version>Camden.BUILD-SNAPSHOT</spring-cloud-dependencies.version>
    </properties>
    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-dependencies</artifactId>
                <version>${spring-cloud-dependencies.version}</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-contract-maven-plugin</artifactId>
                <version>${spring-cloud-contract.version}</version>
                <extensions>true</extensions>
                <configuration>
                    <!-- By default it would search under src/test/resources/ -->
                    <contractsDirectory>${project.basedir}</contractsDirectory>
                </configuration>
            </plugin>
        </plugins>
    </build>
    <repositories>
        <repository>
            <id>spring-snapshots</id>
            <name>Spring Snapshots</name>
            <url>https://repo.spring.io/snapshot</url>
            <snapshots>
                <enabled>true</enabled>
            </snapshots>
        </repository>
        <repository>
            <id>spring-milestones</id>
            <name>Spring Milestones</name>
            <url>https://repo.spring.io/milestone</url>
            <snapshots>
                <enabled>false</enabled>
            </snapshots>
        </repository>
        <repository>
            <id>spring-releases</id>
            <name>Spring Releases</name>
            <url>https://repo.spring.io/release</url>
            <snapshots>
                <enabled>false</enabled>
            </snapshots>
        </repository>
    </repositories>
    <pluginRepositories>
        <pluginRepository>
            <id>spring-snapshots</id>
            <name>Spring Snapshots</name>
            <url>https://repo.spring.io/snapshot</url>
            <snapshots>
                <enabled>true</enabled>
            </snapshots>
        </pluginRepository>
        <pluginRepository>
            <id>spring-milestones</id>
            <name>Spring Milestones</name>
            <url>https://repo.spring.io/milestone</url>
            <snapshots>
                <enabled>false</enabled>
            </snapshots>
        </pluginRepository>
        <pluginRepository>
            <id>spring-releases</id>
            <name>Spring Releases</name>
            <url>https://repo.spring.io/release</url>
            <snapshots>
                <enabled>false</enabled>
            </snapshots>
        </pluginRepository>
    </pluginRepositories>
</project>
As you can see there are no dependencies other than the Spring Cloud Contract Verifier Maven plugin. Those poms are necessary for the consumer side to run mvn clean install -DskipTests to locally install the stubs of the producer project.
The pom.xml in the root folder can look like this:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="https://maven.apache.org/POM/4.0.0" xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="https://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.example.standalone</groupId>
    <artifactId>contracts</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>Contracts</name>
    <description>Contains all the Spring Cloud Contracts, well, contracts. JAR used by the producers to generate tests and stubs</description>
    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-assembly-plugin</artifactId>
                <executions>
                    <execution>
                        <id>contracts</id>
                        <phase>prepare-package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                        <configuration>
                            <attach>true</attach>
                            <descriptor>${basedir}/src/assembly/contracts.xml</descriptor>
                            <!-- If you want an explicit classifier remove the following line -->
                            <appendAssemblyId>false</appendAssemblyId>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>
It’s using the assembly plugin in order to build the JAR with all the contracts. An example of such a setup is here:
<assembly xmlns="https://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3"
          xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="https://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3 https://maven.apache.org/xsd/assembly-1.1.3.xsd">
    <id>project</id>
    <formats>
        <format>jar</format>
    </formats>
    <includeBaseDirectory>false</includeBaseDirectory>
    <fileSets>
        <fileSet>
            <directory>${project.basedir}</directory>
            <outputDirectory>/</outputDirectory>
            <useDefaultExcludes>true</useDefaultExcludes>
            <excludes>
                <exclude>**/${project.build.directory}/**</exclude>
                <exclude>mvnw</exclude>
                <exclude>mvnw.cmd</exclude>
                <exclude>.mvn/**</exclude>
                <exclude>src/**</exclude>
            </excludes>
        </fileSet>
    </fileSets>
</assembly>
The workflow would look similar to the one presented in the Step by step guide to CDC. The only difference is that the producer doesn’t own the contracts anymore, so the consumer and the producer have to work on common contracts in a common repository.
When the consumer wants to work on the contracts offline, instead of cloning the producer code, the consumer team clones the common repository, goes to the required producer’s folder (e.g. com/example/server) and runs mvn clean install -DskipTests to locally install the stubs converted from the contracts.
REMEMBER! You need to have Maven installed locally.
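Once the stubs are installed in the local Maven repository, the consumer can point Stub Runner at them. A hypothetical sketch of the consumer-side properties, assuming Stub Runner 1.x style keys (the exact key names vary between versions, so treat these as placeholders to verify against your version’s docs):

```properties
# Resolve stubs from the local .m2 repository instead of a remote one
# (assumed key; some versions spell it stubrunner.workOffline)
stubrunner.work-offline=true
# ivy notation of the locally installed producer stubs (port is optional)
stubrunner.ids=com.example:server:+:stubs:8080
```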
As a producer it’s enough to alter the Spring Cloud Contract Verifier setup to provide the URL and the dependency of the JAR containing the contracts:
<plugin>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-contract-maven-plugin</artifactId>
    <configuration>
        <contractsRepositoryUrl>https://link/to/your/nexus/or/artifactory/or/sth</contractsRepositoryUrl>
        <contractDependency>
            <groupId>com.example.standalone</groupId>
            <artifactId>contracts</artifactId>
        </contractDependency>
    </configuration>
</plugin>
With this setup the JAR with group id com.example.standalone and artifact id contracts will be downloaded from https://link/to/your/nexus/or/artifactory/or/sth. It will then be unpacked in a local temporary folder, and the contracts present under com/example/server will be picked as the ones used to generate the tests and the stubs. Due to this convention the producer team will know which consumer teams will be broken when some incompatible changes are made.
The rest of the flow looks the same.
Providing one single base class for all the tests was quite a problem - after some time the mock configurations became enormous! That’s why we’ve added a possibility to map a contract to its test base class.
If your base classes differ between contracts, you can tell the Spring Cloud Contract plugin which class should be extended by the autogenerated tests. You have two options:
packageWithBaseClasses
baseClassMappings
The convention is such that if you have a contract under e.g. src/test/resources/contract/foo/bar/baz/ and set the value of the packageWithBaseClasses property to com.example.base, then we will assume that there is a BarBazBase class under the com.example.base package. In other words, we take the last two parts of the package, if they exist, and form a class with a Base suffix. This takes precedence over baseClassForTests. Example of usage in the contracts closure:
packageWithBaseClasses = 'com.example.base'
You can manually map a regular expression of the contract’s package (package, not folder) to the fully qualified name of the base class for the matched contract. Let’s take a look at the following example:
baseClassForTests = "com.example.FooBase"
baseClassMappings {
    baseClassMapping('.*com.*', 'com.example.ComBase')
    baseClassMapping('.*bar.*', 'com.example.BarBase')
}
Let’s assume that you have contracts under:
src/test/resources/contract/com/
src/test/resources/contract/foo/
By providing the baseClassForTests we have a fallback in case the mapping didn’t succeed (you could also provide the packageWithBaseClasses as a fallback). That way the tests generated from the src/test/resources/contract/com/ contracts will extend com.example.ComBase, whereas the rest of the tests will extend com.example.FooBase, because they don’t match the base class mapping for the bar folder.
Let’s now see how this looks for Maven. To accomplish the same result as the one presented for Gradle, you’d have to set up your configuration like this:
<plugin>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-contract-maven-plugin</artifactId>
    <configuration>
        <packageWithBaseClasses>com.example.base</packageWithBaseClasses>
    </configuration>
</plugin>
You can manually map a regular expression of the contract’s package to the fully qualified name of the base class for the matched contract. You have to provide a baseClassMappings list of baseClassMapping entries that take a contractPackageRegex to baseClassFQN mapping. Let’s take a look at the following example:
<plugin>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-contract-maven-plugin</artifactId>
    <configuration>
        <baseClassForTests>com.example.FooBase</baseClassForTests>
        <baseClassMappings>
            <baseClassMapping>
                <contractPackageRegex>.*com.*</contractPackageRegex>
                <baseClassFQN>com.example.ComBase</baseClassFQN>
            </baseClassMapping>
            <baseClassMapping>
                <contractPackageRegex>.*bar.*</contractPackageRegex>
                <baseClassFQN>com.example.BarBase</baseClassFQN>
            </baseClassMapping>
        </baseClassMappings>
    </configuration>
</plugin>
In this blog post we’ve looked at the new and shiny features in the GA release of Spring Cloud Contract. We’ve also gone through some history of the Accurest to Spring Cloud Contract migration.
Here you can find interesting links related to Spring Cloud Contract Verifier:
Accurest was created because of the lack of an easy-to-use tool for doing Consumer Driven Contracts. From our production experience the biggest problem was the lack of verification that the defined contract actually does what it says it does. We wanted to ensure that tests are automatically generated from the contract, so that we have proof that the stubs are reliable. Since there was no such tool, the first commit of Accurest took place in December 2014. The very idea and its implementation were initially set out by Jakub Kubrynski and me. The last available version of Accurest was 1.1.0, released in June 2016 (the docs for the old version are available here). During these 19 months a lot of feedback was gathered. The tool was very well received, and that made us want to work even harder. Many times we decided to cut down on sleep in order to fix a bug or develop a new feature in Accurest.
Speaking of features, quite a few of them definitely make Accurest stand out on the “market” of Consumer Driven Contracts (CDC) tooling. The most interesting are:
For more information check out my posts about Stub Runner, Accurest Messaging or just read the docs.
At Pivotal we came to the conclusion that Accurest could become an interesting addition to our Spring Cloud tooling. Due to the increased interest of the community in the Consumer Driven Contracts approach, we’ve decided to start the Spring Cloud Contract initiative.
Accurest became Spring Cloud Contract Verifier (note: the name might change in the future) but for the time being it will remain in the Codearte repository. It’s becoming part of the Spring Cloud tooling as a mature tool with a growing community around it. Some arguments for that are that it has:
Since we believe very much in the Consumer Driven Contracts approach, we also want to develop the library in a client-driven way. That means that we (the server side) are very open to your feedback (the consumer side) and want you to be the main driver of changes in the library.
The Accurest project would never come to life without the hard work of the Codearte developers (the order is random):
and obviously everybody who has ever committed something to the project.
If you want to read more about Spring Cloud Contract Verifier just check out the following links.
Today’s post will be about the new stuff that you will be able to profit from in the upcoming 1.1.0 release of Accurest. You can also profit from most of these features in the 1.1.0.M3 release.
I’ll just quickly go through the features, but note that you can read about all of them in more depth in our documentation.
AccuREST started as a library used to stub HTTP calls. In the upcoming 1.1.0 release you will be able to stub messaging functionality too. That’s why the name changes to Accurest. That’s a fantastic name, isn’t it? ;)
Also, since branding is important, instead of calling io.codearte.accurest.dsl.GroovyDsl you can now call io.codearte.accurest.dsl.Accurest :)
It took me quite some time to do this but it was worth it :) Several sleepless nights and now you can profit from defining contracts for messaging. In HTTP we had the client/stub side and the server/test side. For messaging we added methods to help discern the differences:
publisher - the side for which the tests will be generated
consumer - the side for which the messaging endpoints will be stubbed
There are 3 use cases from the message Producer’s point of view.
Here you can see examples of contracts for those three situations (you can read more about them in the docs):
The output message can be triggered by calling a method (e.g. a Scheduler was started and a message was sent)
def dsl = Accurest.make {
    // Human readable description
    description 'Some description'
    // Label by means of which the output message can be triggered
    label 'some_label'
    // input to the contract
    input {
        // the contract will be triggered by a method
        triggeredBy('bookReturnedTriggered()')
    }
    // output message of the contract
    outputMessage {
        // destination to which the output message will be sent
        sentTo('output')
        // the body of the output message
        body('''{ "bookName" : "foo" }''')
        // the headers of the output message
        headers {
            header('BOOK-NAME', 'foo')
        }
    }
}
The output message can be triggered by receiving a message.
def dsl = GroovyDsl.make {
    description 'Some Description'
    label 'some_label'
    // input is a message
    input {
        // the message was received from this destination
        messageFrom('input')
        // has the following body
        messageBody([
            bookName: 'foo'
        ])
        // and the following headers
        messageHeaders {
            header('sample', 'header')
        }
    }
    outputMessage {
        sentTo('output')
        body([
            bookName: 'foo'
        ])
        headers {
            header('BOOK-NAME', 'foo')
        }
    }
}
There can be only input without any output
def dsl = GroovyDsl.make {
    description 'Some Description'
    label 'some_label'
    // input is a message
    input {
        // the message was received from this destination
        messageFrom('input')
        // has the following body
        messageBody([
            bookName: 'foo'
        ])
        // and the following headers
        messageHeaders {
            header('sample', 'header')
        }
    }
}
Here you can see an example of a JUnit generated test for the producer for the input / output scenario:
// given:
AccurestMessage inputMessage = accurestMessaging.create(
        "{\"bookName\":\"foo\"}",
        headers().header("sample", "header"));
// when:
accurestMessaging.send(inputMessage, "input");
// then:
AccurestMessage response = accurestMessaging.receiveMessage("output");
assertThat(response).isNotNull();
assertThat(response.getHeader("BOOK-NAME")).isEqualTo("foo");
// and:
DocumentContext parsedJson = JsonPath.parse(accurestObjectMapper.writeValueAsString(response.getPayload()));
assertThatJson(parsedJson).field("bookName").isEqualTo("foo");
We’re sending a message to a destination called input. Next, we’re checking if there’s a message at the output destination. If that’s the case, we’re checking if that message has the proper headers and body.
It’s enough to provide the dependency on the proper Stub Runner module (check the next section for more information) and tell it which stubs should be downloaded. Yup, that’s it! Stub Runner will download the stubs and prepare the stubbed routes.
Sometimes you’ll need to trigger a message somehow in your tests. That’s why we’ve provided the StubTrigger interface that you can inject! If you’re already familiar with Stub Runner Spring then you could use the StubFinder bean to find the URL of your dependency. Now StubFinder also extends the StubTrigger interface, thus you don’t have to inject any additional beans in your tests.
There are multiple ways in which you can trigger a message:
stubFinder.trigger('return_book_1')
stubFinder.trigger('io.codearte.accurest.stubs:camelService', 'return_book_1')
stubFinder.trigger('camelService', 'return_book_1')
stubFinder.trigger()
We provide the following out of the box integrations:
We also provide all the building blocks to create a custom integration.
Just by providing the proper dependency:
// for Apache Camel
testCompile "io.codearte.accurest:accurest-messaging-camel:${accurestVersion}"
// for Spring Integration
testCompile "io.codearte.accurest:accurest-messaging-integration:${accurestVersion}"
// for Spring Cloud Stream
testCompile "io.codearte.accurest:accurest-messaging-stream:${accurestVersion}"
Your generated tests should just work.
I’ve added a new module of Stub Runner that operates on Spring Boot. Assuming that you’re using Spring Cloud Stream, you can create a project that has 2 dependencies:
compile "io.codearte.accurest:stub-runner-boot:${accurestVersion}"
compile "io.codearte.accurest:stub-runner-messaging-stream:${accurestVersion}"
Now if you pass the proper Stub Runner Spring configuration e.g.:
stubrunner.stubs.ids: io.codearte.accurest.stubs:streamService
You will have a running app that exposes HTTP endpoints to
Mariusz Smykuła has done a fantastic job by adding the Accurest Maven Plugin. Now you can add Accurest to your project that runs with Maven. But that’s not all, since the Maven plugin allows you to run the Accurest stubs using the accurest:run command!
Read the docs to know more!
With messaging coming as a feature I’ve added a bunch of messaging modules. You can read more about the Stub Runner messaging modules here.
Another feature that was missing and is really valuable is that now you can explicitly say that you want a particular dependency to be started at a given port. This feature has been available since version 1.0.7, but the stub id format has changed in 1.1.0.M4, so be warned ;)
The ids have changed because now you can provide the desired version of the stub that you want to download.
Now you can provide the id of a stub like this:
groupId:artifactId:version:classifier:port
where version, classifier and port are optional, and port means the port of the WireMock server.
So if you provide your dependency like this:
stubrunner.stubs.ids: io.codearte.accurest.stubs:streamService:0.0.1-SNAPSHOT:stubs:9090,io.codearte.accurest.stubs:anotherService:+:9095
It will make Stub Runner:
download the stub with group id io.codearte.accurest.stubs, artifact id streamService, version 0.0.1-SNAPSHOT, classifier stubs, and register it at port 9090
download the stub with group id io.codearte.accurest.stubs, artifact id anotherService, latest version, default classifier (stubs), and register it at port 9095
When using the AccurestRule you can add a stub to download and then pass the port for the last downloaded stub.
@ClassRule public static AccurestRule rule = new AccurestRule()
.repoRoot(repoRoot())
.downloadStub("io.codearte.accurest.stubs", "loanIssuance")
.withPort(12345)
.downloadStub("io.codearte.accurest.stubs:fraudDetectionServer:12346");
You can see that for this example the following test is valid:
then(rule.findStubUrl("loanIssuance")).isEqualTo(URI.create("https://localhost:12345").toURL());
then(rule.findStubUrl("fraudDetectionServer")).isEqualTo(URI.create("https://localhost:12346").toURL());
Apart from features we’ve done some technical refactoring.
I’ve migrated the mechanism used to download dependencies from Groovy Grape to Aether. We had a lot of issues with Grape and Aether works very well for now. That’s a backwards incompatible change so if you had some custom Grape configuration then you’ll have to port it to Aether.
We had some problems with explicit and transitive dependencies that got fixed. The Accurest jars should be smaller.
Quite frankly, recently when I wasn't coding Spring Cloud Sleuth I did a lot around Accurest and messaging, so stay tuned! For sure there will be a new post about Consumer Driven Contracts and Messaging.
]]>Wouldn’t it be great to retrieve the value from the JSON via the JSON Path? There you go!
given:
String json = ''' [ {
"some" : {
"nested" : {
"json" : "with value",
"anothervalue": 4,
"withlist" : [
{ "name" :"name1"} ,
{"name": "name2"},
{"anothernested": { "name": "name3"} }
]
}
}
},
{
"someother" : {
"nested" : {
"json" : true,
"anothervalue": 4,
"withlist" : [
{ "name" :"name1"} , {"name": "name2"}
],
"withlist2" : [
"a", "b"
]
}
}
}
]
'''
expect:
JsonPath.builder(json).array().field("some").field("nested").field("json").read(String) == 'with value'
JsonPath.builder(json).array().field("some").field("nested").field("anothervalue").read(Integer) == 4
assertThat(json).array().field("some").field("nested").array("withlist").field("name").read(List) == ['name1', 'name2']
assertThat(json).array().field("someother").field("nested").array("withlist2").read(List) == ['a', 'b']
assertThat(json).array().field("someother").field("nested").field("json").read(Boolean) == true
The JsonVerifiable extends the JsonReader, which allows you to call the read(Class<T> clazz) method to retrieve a value from the JSON based on the JSON Path.
Remember that JSON Assert has its own Gitter channel so in case of questions do not hesitate to contact me there.
]]>Since I left the company that owns the original UpToDate Gradle Plugin repository, the project is barely maintained at all. For quite a long time any development was done mostly by me and I was actually the author of most of its code (as in the case of Stub Runner). That's why I've decided to fork the code, repackage it and start versioning from 1.0.0.
Gradle plugin that tells you what libs have new versions on Maven Central, so when you come back to a project, you know what you can update.
buildscript {
repositories {
mavenCentral()
}
dependencies {
classpath 'com.toomuchcoding:uptodate-gradle-plugin:1.0.0'
}
}
apply plugin: 'com.toomuchcoding.uptodate'
And now you can run the plugin with
gradle uptodate
For more information just read the project’s Readme.
How do I migrate to com.toomuchcoding:uptodate-gradle-plugin?
If you're using the old version of the code just change
com.ofg
into
com.toomuchcoding
and that should be it :) Oh, and change the version. I’m starting versioning from 1.0.0.
Talk to me at the project’s Gitter.
]]>After releasing Spring Cloud Sleuth as a part of Brixton RC1 we have just released version 1.0.4 of AccuREST. We've fixed a couple of bugs and introduced a couple of big features, including:
This post will describe the latter feature in more depth.
I’ve given quite a few talks about the library called Micro-Infra-Spring where I presented how you can profit from the Stub Runner functionality. Since I left the company that owns that repository, the project is barely maintained at all. For quite a long time any development was done mostly by me and I was actually the author of most of the Stub Runner's code. Due to that, and the fact that Stub Runner is tightly coupled with AccuREST's stub generation feature, I've decided to migrate it to the AccuREST repository.
Stub Runner is tightly coupled with the concepts coming from AccuREST. For more information about AccuREST you can check my blog entries or check AccuREST project on Github. If you don’t have a clue what that is I’ll try to do a very fast recap.
AccuREST is a Consumer Driven Contracts verifier in which you define the contract of your API via a Groovy DSL. From that DSL, on the server side, tests are created to check if your contract is telling the truth. From the Stub Runner’s perspective more interesting is the client side. For the client side AccuREST generates WireMock stubs from the provided DSL so that the clients of that API can be provided with reliable stubs.
Now that we remember what AccuREST does, we can take a more in-depth look at Stub Runner. Let's assume that we have the following flow of services (by the way, this is a screenshot from Zipkin integrated with Spring Cloud Sleuth)
Let’s imagine ourselves as developers of the service2 - the one that calls service3 and service4. Since we’re doing the CDC (Consumer Driven Contracts) approach let’s assume that the stubs of service3 and service4 got already deployed to some Maven repository.
If I’m writing integration tests of service2 I’ll surely have some points of interaction with service3 and service4. In the majority of cases I’ll just mock those interactions in my code, but it would be valuable to have a real HTTP call made to the other application. Of course I don’t want to download both services and run them only for integration tests - that would be overkill. That’s why the most preferable solution at this point is to run the stubs of my collaborators.
Since I’m too lazy to do things manually I’d prefer the stubs to be automatically downloaded for me, the WireMock servers started and fed with the stub definitions.
And that’s exactly what Stub Runner can do for you!
Stub Runner at its core is using Groovy’s Grape mechanism to download the stubs from a given Maven repository. Next it unpacks them to a temporary folder. Let’s assume that you have the following structure of your WireMock stubs inside the stub JAR (example for a service3-stubs.jar
)
├── META-INF
│ └── MANIFEST.MF
└── mappings
└── service3
├── shouldMarkClientAsFraud.json
├── notAWireMockMapping.json
└── shouldSayHello.json
Stub Runner will scan the whole unpacked JAR for any .json
files. There is a convention that stub definitions are placed under the mappings
folder. So it will pick shouldMarkClientAsFraud.json
, notAWireMockMapping.json
and shouldSayHello.json
files.
Next, a WireMock instance is started for each dependency, and Stub Runner attempts to parse every JSON it found as a WireMock stub definition. Any exceptions at this point are ignored (so assuming that notAWireMockMapping.json is not a valid WireMock definition, the exception will be suppressed). In our scenario two WireMock servers will be started - one for service3 and one for service4.
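The scan-and-suppress behaviour can be sketched in a few lines. This is illustrative only, not Stub Runner's real implementation, and the "contains a request element" check merely stands in for WireMock's actual mapping parser:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Illustrative sketch (NOT Stub Runner's code): walk the unpacked stub JAR,
// pick up every *.json file, try to parse it as a WireMock mapping and
// silently skip the ones that fail to parse.
public class StubScanner {

    // Stand-in for WireMock's mapping parser: real mappings contain a
    // top-level "request" element, so we use that as a crude validity check.
    static void parseAsWireMockStub(String content) {
        if (!content.contains("\"request\"")) {
            throw new IllegalArgumentException("not a WireMock mapping");
        }
    }

    public static List<String> registerStubs(Path unpackedJar) {
        try (Stream<Path> files = Files.walk(unpackedJar)) {
            List<String> registered = new ArrayList<>();
            List<Path> jsons = files
                    .filter(p -> p.toString().endsWith(".json"))
                    .collect(Collectors.toList());
            for (Path json : jsons) {
                try {
                    parseAsWireMockStub(Files.readString(json));
                    registered.add(json.getFileName().toString());
                } catch (RuntimeException ignored) {
                    // invalid mappings are silently skipped
                }
            }
            return registered;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Builds a throwaway directory mirroring the unpacked service3-stubs.jar
    // layout from the post and scans it.
    public static List<String> demo() {
        try {
            Path dir = Files.createTempDirectory("mappings");
            Files.writeString(dir.resolve("shouldSayHello.json"),
                    "{ \"request\": { \"url\": \"/hello\" } }");
            Files.writeString(dir.resolve("notAWireMockMapping.json"),
                    "{ \"foo\": 1 }");
            return registerStubs(dir);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```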
That way you don’t have to copy the stubs manually. The stubs are centralized since they are stored in a Maven repository. That’s extremely important because Stub Runner always downloads the newest version of the stubs, so you can be sure that your tests will break the moment someone makes an incompatible change.
From the developer’s perspective there are only a handful of Stub Runner’s classes that should be used. In the majority of cases you will use the following ones:
An interface that allows you to find the URL of the started WireMock instance. You can find that URL by
passing the Ivy notation (groupId:artifactId
) or just the artifactId
- Stub Runner will try to take care of the rest.
interface StubFinder {
/**
* For the given groupId and artifactId tries to find the matching
* URL of the running stub.
*
* @param groupId - might be null. In that case a search only via artifactId takes place
* @return URL of a running stub or null if not found
*/
URL findStubUrl(String groupId, String artifactId)
/**
* For the given Ivy notation {@code groupId:artifactId} tries to find the matching
* URL of the running stub. You can also pass only {@code artifactId}.
*
* @param ivyNotation - Ivy representation of the Maven artifact
* @return URL of a running stub or null if not found
*/
URL findStubUrl(String ivyNotation)
/**
* Returns all running stubs
*/
RunningStubs findAllRunningStubs()
}
A structure representing the already running stubs. It gives you some helper methods to retrieve the Ivy representation of a particular stub, find the port for a stub, etc.
A contract for classes that can run the stubs:
interface StubRunning extends Closeable, StubFinder {
/**
* Runs the stubs and returns the {@link RunningStubs}
*/
RunningStubs runStubs()
}
Represents a single instance of ready-to-run stubs. It can run the stubs and will return the running instance of WireMock wrapped in a RunningStubs class. Since it implements StubFinder, it can also be queried to check whether the current groupId and artifactId match the corresponding running stub.
If you have multiple services for which you want to run the WireMocks with stubs it’s enough to use BatchStubRunner
. It iterates over the given Iterable
of StubRunner
and executes the logic on each of them.
In all the examples below let’s assume that the stubs are stored in the Maven repository available under https://toomuchcoding.com
URL. As service2 I’d like to download the stubs of com.toomuchcoding:service3
and
com.toomuchcoding:service4
services.
Stub Runner comes with a main class (io.codearte.accurest.stubrunner.StubRunnerMain
) which you can run with the following options:
-maxp (--maxPort) N : Maximum port value to be assigned to the
Wiremock instance. Defaults to 15000
(default: 15000)
-minp (--minPort) N : Minimal port value to be assigned to the
Wiremock instance. Defaults to 10000
(default: 10000)
-s (--stubs) VAL : Comma separated list of Ivy representation of
jars with stubs. Eg. groupid:artifactid1,group
id2:artifactid2:classifier
-sr (--stubRepositoryRoot) VAL : Location of a Jar containing server where you
keep your stubs (e.g. https://nexus.net/content
/repositories/repository)
-ss (--stubsSuffix) VAL : Suffix for the jar containing stubs (e.g.
'stubs' if the stub jar would have a 'stubs'
classifier for stubs: foobar-stubs ).
Defaults to 'stubs' (default: stubs)
-wo (--workOffline) : Switch to work offline. Defaults to 'false'
(default: false)
You can run that main class from IDE or build yourself a fat JAR. To do that just call the following command:
./gradlew stub-runner-root:stub-runner:shadowJar -PfatJar
Then inside build/libs there will be a fat JAR with the classifier fatJar waiting for you to execute.
Coming back to our example: once the fat JAR is built, I would just call the following command to retrieve the stubs of service3 and service4 from the Maven repository available at https://toomuchcoding.com.
java -jar stub-runner-1.0.4-SNAPSHOT-fatJar.jar -sr https://toomuchcoding.com -s com.toomuchcoding:service3:stubs,com.toomuchcoding.service4
Running Stub Runner as a main class makes the most sense when you’re running some fast smoke tests on a deployed application and you don’t want to download and run all the collaborators of that application. For more rationale behind such an approach you can check my article about Microservice Deployment.
You can use the Stub Runner’s JUnit rule to automatically download and run the stubs during your tests. The AccurestRule
implements the StubFinder
interface thus you can easily find the URLs of the services that you’re interested in.
This is how you could do it with Spock:
class SomeSpec extends Specification {
@ClassRule @Shared AccurestRule rule = new AccurestRule()
.repoRoot('https://toomuchcoding.com')
.downloadStub("com.toomuchcoding", "service3")
.downloadStub("com.toomuchcoding:service4")
def 'should do something useful when service3 is called'() {
given:
URL service3Url = rule.findStubUrl('com.toomuchcoding', 'service3')
expect:
somethingUseful(service3Url)
}
def 'should do something even more useful when service4 is called'() {
given:
URL service4Url = rule.findStubUrl('service4')
expect:
somethingMoreUseful(service4Url)
}
}
or with plain Java JUnit:
public class SomeTest {
@ClassRule public static AccurestRule rule = new AccurestRule()
.repoRoot("https://toomuchcoding.com")
.downloadStub("com.toomuchcoding", "service3")
.downloadStub("com.toomuchcoding:service4");
@Test
public void should_do_something_useful_when_service3_is_called() {
URL service3Url = rule.findStubUrl("com.toomuchcoding", "service3");
somethingUseful(service3Url);
}
@Test
public void should_do_something_even_more_useful_when_service4_is_called() {
URL service4Url = rule.findStubUrl("service4");
somethingMoreUseful(service4Url);
}
}
You can use this rule anywhere you like if we don’t provide an integration with your existing framework.
You can use the Stub Runner’s Spring configuration to download the stubs of your collaborators and run the WireMock server upon Spring context booting. We’re providing the StubRunnerConfiguration
that you can import in your tests. In that configuration we’re registering a StubFinder
bean that you can autowire in your tests.
Having the following application.yaml
file:
stubrunner.stubs.repository.root: https://toomuchcoding.com
stubrunner.stubs.ids: com.toomuchcoding:service3:stubs,com.toomuchcoding.service4
This is how you could do it with Spock
@ContextConfiguration(classes = Config, loader = SpringApplicationContextLoader)
class StubRunnerConfigurationSpec extends Specification {
@Autowired StubFinder stubFinder
def 'should do something useful when service3 is called'() {
given:
URL service3Url = stubFinder.findStubUrl('com.toomuchcoding', 'service3')
expect:
somethingUseful(service3Url)
}
def 'should do something even more useful when service4 is called'() {
given:
URL service4Url = stubFinder.findStubUrl('service4')
expect:
somethingMoreUseful(service4Url)
}
@Configuration
@Import(StubRunnerConfiguration)
@EnableAutoConfiguration
static class Config {}
}
Use it in your tests if you have Spring but don’t have Spring Cloud. You can also add it at compile time (of course you’d have to add some Spring profiles so as not to run it in production) to profit from a “developer” mode of running microservices. That means that if you boot up your application to click around it, all the stubs around you will have already been downloaded and started.
You can use the Stub Runner’s Spring Cloud configuration to profit from the stubbed collaborators when using Spring Cloud’s abstractions over service discovery and when you’re using Netflix Ribbon. Stub Runner Spring Cloud configuration is an AutoConfiguration
so it’s automatically started for you.
Let’s assume that you’re referring to service3 as service3
in your code and to service4 as shouldMapThisNameToService4
. That means that you’re using for example the @LoadBalanced
RestTemplate
in the following way (don’t use field injection as I do in this example!!):
@Component
class SomeClass {
@Autowired @LoadBalanced RestTemplate restTemplate
void doSth() {
// code...
String service3Response = restTemplate.getForObject('https://service3/name', String)
String service4Response = restTemplate.getForObject('https://shouldMapThisNameToService4/name', String)
// more code...
}
}
If the service Id that you’re using to call other services maps exactly to the name of the artifact Id in a Maven repository then you’re lucky and don’t have to do anything to find your running stubs. If however that’s not the case - don’t worry, you’ll just have to map it yourself.
The stubrunner.stubs.idsToServiceIds
property is the root path to a map in which the key is the artifactID of the downloaded stub and the value is the serviceId used in the code.
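The lookup implied by that map can be pictured as a simple inversion. A sketch under my own naming (illustrative, not Stub Runner's internals): by default the serviceId is assumed to equal the stub's artifactId, and the map overrides that assumption.

```java
import java.util.Map;

// Illustration of the idsToServiceIds lookup (NOT Stub Runner's code):
// key is the artifactId of the downloaded stub, value is the serviceId
// used in the code.
public class StubIds {

    static final Map<String, String> IDS_TO_SERVICE_IDS =
            Map.of("service4", "shouldMapThisNameToService4");

    public static String artifactIdFor(String serviceId) {
        return IDS_TO_SERVICE_IDS.entrySet().stream()
                .filter(e -> e.getValue().equals(serviceId))
                .map(Map.Entry::getKey)
                .findFirst()
                // no mapping: the serviceId is assumed to be the artifactId
                .orElse(serviceId);
    }
}
```

So a call to `shouldMapThisNameToService4` is routed to the stub downloaded as artifactId `service4`, while `service3` maps to itself.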
Having the following application.yaml
file:
stubrunner.stubs.repository.root: https://toomuchcoding.com
stubrunner.stubs.ids: com.toomuchcoding:service3:stubs,com.toomuchcoding.service4
stubrunner.stubs.idsToServiceIds:
service4: shouldMapThisNameToService4
This is how you could do it with Spock
@ContextConfiguration(classes = Config, loader = SpringApplicationContextLoader)
class StubRunnerConfigurationSpec extends Specification {
@Autowired SomeClass someClass
def 'should not explode'() {
when:
someClass.doSth()
expect:
noExceptionThrown()
}
@Configuration
@EnableAutoConfiguration
static class Config {}
}
Use it when you’re using Spring Cloud. You can also profit from Stub Runner Spring Cloud in “developer” mode, as presented in the Stub Runner Spring section.
You can set the default value of the Maven repository by means of a system property:
-Dstubrunner.stubs.repository.root=https://your.maven.repo.com
The list of configurable properties contains:
| Name | Default value | Description |
|---|---|---|
| stubrunner.port.range.min | 10000 | Minimal value of a port for a WireMock server |
| stubrunner.port.range.max | 15000 | Maximum value of a port for a WireMock server |
| stubrunner.stubs.repository.root | | Address of your M2 repo (will point to the local M2 repo if none is provided) |
| stubrunner.stubs.classifier | stubs | Default classifier for the JARs containing stubs |
| stubrunner.work-offline | false | Whether to work offline and not try to connect to any repo to download stubs (useful if there's no internet) |
| stubrunner.stubs | | Comma separated list of stubs to download |
Stub Runner:
and in 0.2.2 the annoying warning message got removed
Writing JSON Paths to assert JSON is no fun at all… That’s why JSON Assert was created in the first place. One doesn’t always want to use this library to perform assertions, though. What one does want to profit from is the fluent interface for creating the JSON Path expression.
That’s why with 0.3.0 you can use the new class called JsonPath
. It has a single static method builder()
with which you can… well… build the JSON Path. Remember to call jsonPath()
to get its String value.
So for instance running this code:
JsonPath.builder().field("some").field("nested").field("anothervalue").isEqualTo(4).jsonPath()
would result in creating the following String JSON Path representation:
$.some.nested[?(@.anothervalue == 4)]
Other examples:
JsonPath.builder().field("some").field("nested").array("withlist").contains("name").isEqualTo("name1").jsonPath() === '''$.some.nested.withlist[*][?(@.name == 'name1')]'''
JsonPath.builder().field("some").field("nested").field("json").isEqualTo("with \"val'ue").jsonPath() === '''$.some.nested[?(@.json == 'with "val\\'ue')]'''
JsonPath.builder().field("some", "nested", "json").isEqualTo("with \"val'ue").jsonPath() === '''$.some.nested[?(@.json == 'with "val\\'ue')]'''
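To show why such a fluent interface is pleasant to build, here is a toy version of the builder idea (my own sketch, not JSON Assert's real implementation): each field(...) call defers the previous segment, and the terminal isEqualTo(...) turns the last field into a filter expression.

```java
// Toy re-implementation of the fluent JSON Path builder idea
// (NOT JSON Assert's actual code).
public class ToyJsonPathBuilder {
    private final StringBuilder path = new StringBuilder("$");
    private String pendingField;

    public ToyJsonPathBuilder field(String name) {
        // the previous field becomes a plain path segment once we know
        // it is not the one being filtered on
        if (pendingField != null) {
            path.append('.').append(pendingField);
        }
        pendingField = name;
        return this;
    }

    public String isEqualTo(Object value) {
        // numbers are emitted bare, everything else as a quoted literal
        String literal = value instanceof Number
                ? value.toString()
                : "'" + value + "'";
        return path + "[?(@." + pendingField + " == " + literal + ")]";
    }
}
```

Running `new ToyJsonPathBuilder().field("some").field("nested").field("anothervalue").isEqualTo(4)` yields the same `$.some.nested[?(@.anothervalue == 4)]` string as the example above.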
This is a small, handy feature that allows you to write less code. Often you iterate over a JSON that has plenty of fields. With the 0.3.0 release instead of writing:
assertThat(json).field("some").field("nested").field("json").isEqualTo("with \"val'ue")
you can write
assertThat(json1).field("some", "nested", "json").isEqualTo("with \"val'ue")
You get a method that allows you to traverse the JSON fields by passing an array of field names.
Remember that JSON Assert has its own Gitter channel so in case of questions do not hesitate to contact me there.
]]>Recently I’ve been focusing mostly on the Spring Cloud Sleuth project, and quite gigantic changes have happened there since the M5 release. In this short post I’ll show you the rationale and briefly describe the features related to span naming and the customizations related to span propagation.
For those who don’t know what Spring Cloud Sleuth is - it’s a library that implements a distributed tracing solution for Spring Cloud. You can check its code at Github.
We’re also trying to be aligned with the concepts, terminology and approaches present in the OpenTracing Project.
I’ll quote the documentation to present some of the basic concepts of distributed tracing.
Span: The basic unit of work. For example, sending an RPC is a new span, as is sending a response to an RPC. Spans are identified by a unique 64-bit ID for the span and another 64-bit ID for the trace the span is a part of. Spans also have other data, such as descriptions, timestamped events, key-value annotations (tags), the ID of the span that caused them, and process IDs (normally IP address).
Spans are started and stopped, and they keep track of their timing information. Once you create a span, you must stop it at some point in the future.
Trace: A set of spans forming a tree-like structure. For example, if you are running a distributed big-data store, a trace might be formed by a put request.
Annotation: is used to record existence of an event in time. Some of the core annotations used to define the start and stop of a request are:
cs - Client Sent - The client has made a request. This annotation depicts the start of the span.
sr - Server Received - The server side got the request and will start processing it. If one subtracts the cs timestamp from this timestamp one will receive the network latency.
ss - Server Sent - Annotated upon completion of request processing (when the response got sent back to the client). If one subtracts the sr timestamp from this timestamp one will receive the time needed by the server side to process the request.
cr - Client Received - Signifies the end of the span. The client has successfully received the response from the server side. If one subtracts the cs timestamp from this timestamp one will receive the whole time needed by the client to receive the response from the server.
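The subtraction rules above can be captured in a few lines. A worked example with made-up epoch-millisecond timestamps for the four annotations:

```java
// Worked example of the annotation arithmetic described above,
// using made-up epoch-millisecond timestamps.
public class AnnotationMath {
    public static final long CS = 1_000; // Client Sent
    public static final long SR = 1_040; // Server Received
    public static final long SS = 1_190; // Server Sent
    public static final long CR = 1_230; // Client Received

    // sr - cs: how long the request spent on the wire
    public static long networkLatencyOfRequest() { return SR - CS; }

    // ss - sr: time needed by the server side to process the request
    public static long serverProcessingTime() { return SS - SR; }

    // cr - cs: the whole time the client waited for the response
    public static long totalTimeSeenByClient() { return CR - CS; }
}
```

With these numbers the request network latency is 40 ms, the server needed 150 ms, and the client saw 230 ms in total.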
Ok since now we’re on the same page with the terminology let’s see what’s new in Sleuth.
A really big problem in the distributed tracing world is naming spans. Actually, that topic can be looked at from two angles.
The first one is what the name of a span should look like. Should it be long and descriptive, or quite the contrary? As we write in the documentation:
The name should be low cardinality (e.g. not include identifiers).
Finding the name for the span is not that big of a problem from the library’s perspective. You just pass on to a span whatever the user provides. But what about situations in which some operation is deferred in time? Or scheduled at certain intervals?
The second one is related to a bigger issue: for the sake of consistency of passing tracing data, should we enforce creating spans? Should we be eager with that or allow the user to control span creation? Because in the eager case we have the problem of how to name such an artificially created span.
For RC1 we’ve decided that we will be eager in creating span names - but we will come back to the topic in the future releases.
Ok so we know the why, now let’s move to the how… There is quite a lot of instrumentation going on in Sleuth so sometimes the names of spans could sound artificial (e.g. async for asynchronous operations). When talking about runnables and callables often you’re dealing with code similar to this one:
Runnable runnable = new Runnable() {
@Override public void run() {
// perform logic
}
});
Future<?> future = executorService.submit(runnable);
// ... some additional logic ...
future.get();
Here the Runnable is an operation that you would like to wrap in a span. What should the name of that span be? How can you pass it to the Tracer so that the span name gets set?
To answer those issues we’ve introduced two approaches:
- the @SpanName annotation for an explicit class that implements Runnable or Callable
- toString() method resolution for an anonymous instance of either of those interfaces

Most likely in future releases @SpanName or a modification of it will be used more heavily to provide explicit names of spans.
Anyway, examples could look like those in the documentation. An example of a @SpanName annotated class:
@SpanName("calculateTax")
class TaxCountingRunnable implements Runnable {
@Override public void run() {
// perform logic
}
}
and an anonymous instance:
new TraceRunnable(tracer, spanNamer, new Runnable() {
@Override public void run() {
// perform logic
}
@Override public String toString() {
return "calculateTax";
}
});
Both will have the same span name. Remember that both Runnables
should be wrapped in a TraceRunnable
instance.
It’s pretty obvious that there are a lot of companies that have already created some form of distributed tracing instrumentation. In Spring Cloud Sleuth we expect the tracing headers to carry certain names, like X-B3-TraceId for the header containing the trace id or X-B3-SpanId for the span-related one.
One of the first issues that we created was about supporting configurable header names, but we actually implemented it quite late. Anyway, with RC1 it’s possible to customize Sleuth in such a way that it’s compatible with your system’s nomenclature. Let’s define two terms before we go any further: Injector and Extractor.
In Spring Cloud Sleuth an Injector
is actually a functional interface called SpanInjector
. It has the following method:
void inject(Span span, T carrier);
Its purpose is to take whatever is necessary from a span and inject it into the carrier. Let’s assume that in your system you don’t set the trace id header with the name X-B3-TraceId but call it correlationId, and that you use mySpanId instead of X-B3-SpanId. Then you have to override the behavior of Sleuth by registering a custom implementation of the SpanInjector. Let’s look at the following snippets from the documentation:
class CustomHttpServletResponseSpanInjector implements SpanInjector<HttpServletResponse> {
@Override
public void inject(Span span, HttpServletResponse carrier) {
carrier.addHeader("correlationId", Span.idToHex(span.getTraceId()));
carrier.addHeader("mySpanId", Span.idToHex(span.getSpanId()));
// inject the rest of Span values to the header
}
}
Note that this approach will work with Zipkin only if the values that you’re passing are Zipkin-compatible. That means the IDs are 64-bit numbers.
Also, you may wonder why we convert values using Span.idToHex. We’ve decided that we want the values of the ids in the logs and in the message headers to be the very same values as the ones that you can later see in Zipkin. That way you can just copy the value and put it into Zipkin to debug your system.
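In plain JDK terms, that hex round trip looks roughly like this (an illustrative sketch; Sleuth's actual Span.idToHex / Span.hexToId may differ in details):

```java
// Plain-JDK sketch of the 64-bit id <-> hex round trip (illustrative;
// NOT Sleuth's actual implementation). The id is rendered as lower-case
// hex, which is the form you see in the logs and can paste straight into
// the Zipkin UI.
public class SpanIds {
    public static String idToHex(long id) {
        return Long.toHexString(id);
    }

    public static long hexToId(String hex) {
        // parseUnsignedLong copes with ids whose top bit is set
        return Long.parseUnsignedLong(hex, 16);
    }
}
```

For example, the id `e30b6a75bcff782b` seen in the MDC example survives the round trip unchanged.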
Once you have the SpanInjector
you have to register it as a bean with @Primary
annotation as presented below:
@Bean
@Primary
SpanInjector<HttpServletResponse> customHttpServletResponseSpanInjector() {
return new CustomHttpServletResponseSpanInjector();
}
In Spring Cloud Sleuth an Extractor
is actually a functional interface called SpanExtractor
. It has the following method:
Span joinTrace(T carrier);
Its purpose is to create a Span from the provided carrier. Let’s make the same assumption as with the SpanInjector and consider a case where the traceId header is named correlationId and the spanId header is named mySpanId. Then we customize the Spring context by providing our own implementation of the SpanExtractor:
class CustomHttpServletRequestSpanExtractor implements SpanExtractor<HttpServletRequest> {
@Override
public Span joinTrace(HttpServletRequest carrier) {
long traceId = Span.hexToId(carrier.getHeader("correlationId"));
long spanId = Span.hexToId(carrier.getHeader("mySpanId"));
// extract all necessary headers
Span.SpanBuilder builder = Span.builder().traceId(traceId).spanId(spanId);
// build rest of the Span
return builder.build();
}
}
Again, note that we assume the values are Zipkin-compatible (64-bit values for the ids). Also note that we’ve assumed the ids are sent in hexadecimal form, like they are presented in the Zipkin UI. That’s why we used the Span.hexToId method to convert them back to longs.
In this very short post you could see two quite big features available in the RC1 release. You can check Spring Cloud Sleuth documentation for more information about the integrations and configurations of Sleuth. Actually you can check all the things that have changed in the RC1 release by checking the closed issues and merged PRs.
In case of any questions do not hesitate to ping us on the Gitter channel or file an issue on Github.
]]>