Caminante son tus huellas el camino y nada más: caminante, no hay camino se hace el camino al andar (Caminante No Hay Camino - Joan Manuel Serrat)
Friday, December 29, 2017
Java Champion
Wednesday, October 25, 2017
Adding Chaos on OpenShift cluster
Ce joli rajolinet, que les oques tonifique, si le fique en une pique, mantindra le pompis net (El baró de Bidet - La Trinca)
Music: https://www.youtube.com/watch?v=4JWIbKGe4gA
Follow me at https://twitter.com/alexsotob
Tuesday, October 24, 2017
Testing Code that requires a mail server
- Configure your application to use MailHog for SMTP delivery
- View messages in the web UI, or retrieve them with the JSON API
- Optionally release messages to real SMTP servers for delivery
- Docker image with MailHog installed
- Defining a docker-compose file.
- Defining a Container Object.
- Using Container Object DSL.
Notice that putting everything into an object makes it reusable in other tests and even in other projects. You can create an independent project with all your custom-developed container objects and reuse them by importing that project as a test jar in the hosting project.
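As an illustrative sketch, a test against MailHog defined with the Container Object DSL could look roughly like this (the mailhog/mailhog image exposes SMTP on port 1025 and the HTTP API/web UI on port 8025; class and accessor names follow the Arquillian Cube docs as I recall them, so treat them as approximate):

import java.util.Properties;

import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

import org.arquillian.cube.docker.impl.client.containerobject.dsl.Container;
import org.arquillian.cube.docker.impl.client.containerobject.dsl.DockerContainer;
import org.jboss.arquillian.junit.Arquillian;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(Arquillian.class)
public class MailServiceTest {

    // MailHog container: SMTP on 1025, HTTP API/web UI on 8025
    @DockerContainer
    Container mailhog = Container.withContainerName("mailhog")
            .fromImage("mailhog/mailhog")
            .withPortBinding(1025, 8025)
            .build();

    @Test
    public void should_send_mail_through_mailhog() throws Exception {
        Properties props = new Properties();
        props.put("mail.smtp.host", mailhog.getIpAddress());
        props.put("mail.smtp.port", String.valueOf(mailhog.getBindPort(1025)));

        Session session = Session.getInstance(props);
        MimeMessage message = new MimeMessage(session);
        message.setFrom(new InternetAddress("noreply@example.com"));
        message.addRecipient(Message.RecipientType.TO, new InternetAddress("user@example.com"));
        message.setSubject("Hello");
        message.setText("Hello from the test");
        Transport.send(message);

        // The delivered message can then be asserted through MailHog's JSON API on port 8025
    }
}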
Code: https://github.com/lordofthejars/mailtest
We keep learning,
Alex.
'Cause I'm kind of like Han Solo always stroking my own wookie, I'm the root of all that's evil yeah but you can call me cookie (Fire Water Burn - Bloodhound Gang)
Labels: arquillian, Docker, email testing
Tuesday, September 19, 2017
Testing code that uses Java System Properties
Polly wants a cracker, I think I should get off her first, I think she wants some water, To put out the blow torch (Polly - Nirvana)
Labels: java, junit testing java
Tuesday, June 27, 2017
Lifecycle of JUnit 5 Extension Model
So after executing this suite, what is the output? Let's see it. Notice that for the sake of readability I have added some callouts to the terminal output.
<1> The first test class to run is AnotherLoggerExtensionTest. In this case there is only one simple test, so the extension lifecycle is BeforeAll, Test Instance Post-Processing, Before Each, Before Test Execution, then the test itself is executed, and then all the After callbacks.
<2> Then LoggerExtensionTest is executed. The first test is not a parameterized test, so the callbacks related to parameter resolution are not invoked. Before the test method is executed, test instance post-processing is called, and after that all the Before callbacks are fired. Finally the test is executed, followed by all the After callbacks.
<3> The second test requires parameter resolution. Parameter resolvers are run after the Before callbacks and before the test itself is executed.
<4> The last test throws an exception. The Test Execution Exception handler is called after the test is executed but before the After callbacks.
The last thing to notice is that the BeforeAll and AfterAll callbacks are executed per test class, not per suite.
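For reference, a minimal logging extension covering all these callbacks could look roughly like the following sketch (written against the final JUnit 5.0 API; the 5.0.0-M4 milestone used slightly different context types such as ContainerExtensionContext and TestExtensionContext). The test classes register it with @ExtendWith(LoggerExtension.class).

import org.junit.jupiter.api.extension.AfterAllCallback;
import org.junit.jupiter.api.extension.AfterEachCallback;
import org.junit.jupiter.api.extension.AfterTestExecutionCallback;
import org.junit.jupiter.api.extension.BeforeAllCallback;
import org.junit.jupiter.api.extension.BeforeEachCallback;
import org.junit.jupiter.api.extension.BeforeTestExecutionCallback;
import org.junit.jupiter.api.extension.ExtensionContext;
import org.junit.jupiter.api.extension.ParameterContext;
import org.junit.jupiter.api.extension.ParameterResolver;
import org.junit.jupiter.api.extension.TestExecutionExceptionHandler;
import org.junit.jupiter.api.extension.TestInstancePostProcessor;

// Logs every lifecycle callback so the execution order becomes visible in the terminal output
public class LoggerExtension implements BeforeAllCallback, TestInstancePostProcessor,
        BeforeEachCallback, BeforeTestExecutionCallback, ParameterResolver,
        TestExecutionExceptionHandler, AfterTestExecutionCallback, AfterEachCallback, AfterAllCallback {

    @Override
    public void beforeAll(ExtensionContext context) {
        System.out.println("BeforeAll");
    }

    @Override
    public void postProcessTestInstance(Object testInstance, ExtensionContext context) {
        System.out.println("Test Instance Post-Processing");
    }

    @Override
    public void beforeEach(ExtensionContext context) {
        System.out.println("BeforeEach");
    }

    @Override
    public void beforeTestExecution(ExtensionContext context) {
        System.out.println("Before Test Execution");
    }

    @Override
    public boolean supportsParameter(ParameterContext parameterContext, ExtensionContext extensionContext) {
        return parameterContext.getParameter().getType() == String.class;
    }

    @Override
    public Object resolveParameter(ParameterContext parameterContext, ExtensionContext extensionContext) {
        System.out.println("Parameter Resolver");
        return "a resolved value";
    }

    @Override
    public void handleTestExecutionException(ExtensionContext context, Throwable throwable) throws Throwable {
        System.out.println("Test Execution Exception");
        throw throwable;
    }

    @Override
    public void afterTestExecution(ExtensionContext context) {
        System.out.println("After Test Execution");
    }

    @Override
    public void afterEach(ExtensionContext context) {
        System.out.println("AfterEach");
    }

    @Override
    public void afterAll(ExtensionContext context) {
        System.out.println("AfterAll");
    }
}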
The JUnit version used in this example is org.junit.jupiter:junit-jupiter-api:5.0.0-M4
We keep learning,
Alex
That's why we won't back down, We won't run and hide, 'Cause these are the things we can't deny, I'm passing over you like a satellite (Satellite - Rise Against)
Music: https://www.youtube.com/watch?v=6nQCxwneUwA
Follow me at https://twitter.com/alexsotob
Labels: extensions, junit, junit5, test
Friday, June 23, 2017
Test AWS cloud stack offline with Arquillian and LocalStack
- API Gateway at http://localhost:4567
- Kinesis at http://localhost:4568
- DynamoDB at http://localhost:4569
- DynamoDB Streams at http://localhost:4570
- Elasticsearch at http://localhost:4571
- S3 at http://localhost:4572
- Firehose at http://localhost:4573
- Lambda at http://localhost:4574
- SNS at http://localhost:4575
- SQS at http://localhost:4576
- Redshift at http://localhost:4577
- ES (Elasticsearch Service) at http://localhost:4578
- SES at http://localhost:4579
- Route53 at http://localhost:4580
- CloudFormation at http://localhost:4581
- CloudWatch at http://localhost:4582
- Defining a docker-compose file.
- Defining a Container Object.
- Using Container Object DSL.
Important things to take into consideration:
- You annotate your test with the Arquillian runner.
- You use the @DockerContainer annotation on the attribute that defines the container.
- Container Object DSL is just a DSL that lets you configure the container you want to use, in this case the localstack container with the required port-binding information.
- The test just connects to Amazon S3, creates a bucket, and stores some content.
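For instance, the S3 part of such a test could be sketched as follows (bucket name, object key and the dummy credentials are illustrative; LocalStack accepts any credentials):

import static org.junit.Assert.assertEquals;

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import org.junit.Test;

public class S3LocalStackTest {

    @Test
    public void should_create_bucket_and_store_content() {
        // Point the AWS SDK at the S3 endpoint exposed by the localstack container
        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withEndpointConfiguration(
                        new AwsClientBuilder.EndpointConfiguration("http://localhost:4572", "us-east-1"))
                .withCredentials(new AWSStaticCredentialsProvider(
                        new BasicAWSCredentials("accesskey", "secretkey")))
                .withPathStyleAccessEnabled(true)
                .build();

        s3.createBucket("my-bucket");
        s3.putObject("my-bucket", "greeting.txt", "Hello LocalStack");

        assertEquals("Hello LocalStack", s3.getObjectAsString("my-bucket", "greeting.txt"));
    }
}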
So now you can write tests for your application running on the AWS cloud without having to connect to remote hosts, using just your local environment.
We keep learning,
Alex
Tú, tú eres el imán y yo soy el metal, Me voy acercando y voy armando el plan, Solo con pensarlo se acelera el pulso (Oh yeah) (Despacito - Luis Fonsi)
Music: https://www.youtube.com/watch?v=kJQP7kiw5Fk
Labels: arquillia, arquillian cube, aws, Docker, testing
Thursday, June 22, 2017
Vert.X meets Service Virtualization with Hoverfly
But apart from programming expectations, you can also use Hoverfly to capture the traffic between the two services (both of them real in this case) and persist it.
Then, in subsequent runs, Hoverfly uses this persisted traffic to emulate the remote service instead of touching it. In this way, instead of programming expectations, which means encoding how you understand the system, you are using real communication data.
This can be summarised in the next figures:
The first time, the outgoing traffic is sent through the Hoverfly proxy, which redirects it to the real service, and that service generates a response. When the response arrives at the proxy, both the request and the response are stored, and the real response is sent back to the caller.
Then, in subsequent calls of the same method:
The outgoing traffic of Service A is still sent through the Hoverfly proxy, but now the response is returned from the previously stored responses instead of being redirected to the real service.
So, how do you connect the HTTP client of Service A to the Hoverfly proxy? The quick answer is that you don't have to do anything.
Hoverfly just overrides the Java network system properties (https://docs.oracle.com/javase/7/docs/api/java/net/doc-files/net-properties.html), so all communications from the HTTP client (regardless of the host you target) go through the Hoverfly proxy without any extra work on your side.
The problem is: what happens if the API you are using as an HTTP client does not honor these system properties? Then obviously the outgoing communications will not pass through the proxy.
One example is Vert.X and its HTTP client io.vertx.rxjava.ext.web.client.WebClient. Since WebClient does not honor these properties, you need to configure the client properly in order to use Hoverfly.
The basic step you need to do is configure the WebClient with proxy options taken from the system properties that Hoverfly sets.
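A minimal sketch of that configuration (shown with the plain WebClient flavour; the rxjava variant accepts the same WebClientOptions) could be:

import io.vertx.core.Vertx;
import io.vertx.core.net.ProxyOptions;
import io.vertx.ext.web.client.WebClient;
import io.vertx.ext.web.client.WebClientOptions;

public class WebClientFactory {

    public static WebClient create(Vertx vertx) {
        WebClientOptions options = new WebClientOptions();
        String proxyHost = System.getProperty("http.proxyHost");
        String proxyPort = System.getProperty("http.proxyPort");
        if (proxyHost != null && proxyPort != null) {
            // Route outgoing requests through the proxy configured by Hoverfly
            options.setProxyOptions(new ProxyOptions()
                    .setHost(proxyHost)
                    .setPort(Integer.parseInt(proxyPort)));
        }
        return WebClient.create(vertx, options);
    }
}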
Notice that the only thing done here is checking whether the network proxy system properties have been set (by Hoverfly Java) and, if so, creating a Vert.X ProxyOptions object to configure the HTTP client.
With this change, you can write tests with Hoverfly and Vert.X without any problem:
In the previous example Hoverfly is used in simulate mode, and the request/response definitions come in the form of a DSL instead of an external JSON script.
Notice that in this case you are programming that, when the current service (VillainsVerticle) makes a GET request to host crimes on port 9090 at /crimes/Gru, the given response is returned. For the sake of simplicity of this post, this method is enough.
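A rough sketch of that simulation with the Hoverfly Java DSL (the JSON payload is just a placeholder) could be:

import static io.specto.hoverfly.junit.core.SimulationSource.dsl;
import static io.specto.hoverfly.junit.dsl.HoverflyDsl.service;
import static io.specto.hoverfly.junit.dsl.ResponseCreators.success;

import io.specto.hoverfly.junit.rule.HoverflyRule;
import org.junit.ClassRule;

public class VillainsVerticleTest {

    // Simulate the crimes service: GET crimes:9090/crimes/Gru returns a canned JSON payload
    @ClassRule
    public static HoverflyRule hoverflyRule = HoverflyRule.inSimulationMode(dsl(
            service("crimes:9090")
                    .get("/crimes/Gru")
                    .willReturn(success("[{\"name\":\"Moon\"}]", "application/json"))));
}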
You can see source code at https://github.com/arquillian-testing-microservices/villains-service and read about Hoverfly Java at http://hoverfly-java.readthedocs.io/en/latest/
We keep learning,
Alex
No vull veure't, vull mirar-te. No vull imaginar-te, vull sentir-te. Vull compartir tot això que sents. No vull tenir-te a tu: vull, amb tu, tenir el temps. (Una LLuna a l'Aigua - Txarango)
Music: https://www.youtube.com/watch?v=BeH2eH9iPw4
Labels: continuous delivery, hoverfly, microservices, reactive, service virtualization, test, testing, vertx
Wednesday, May 24, 2017
Deploying Docker Images to OpenShift
OpenShift is Red Hat's cloud development Platform as a Service (PaaS). It uses Kubernetes for container orchestration (so you can use OpenShift as a Kubernetes implementation), while providing some features missing in Kubernetes, such as automation of the container build process, health management, dynamic storage provisioning or multi-tenancy, to cite a few.
After that you need to log in to the OpenShift cluster; in the case of OpenShift Online, use the token provided:
oc login https://api.starter-us-east-1.openshift.com --token=xxxxxxx
Then you need to create a new project inside OpenShift.
oc new-project villains
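After that, you can deploy the Docker image as a new application with a command along the lines of:
oc new-app lordofthejars/crimes:1.0 --name=crimes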
In this case a new app called crimes is created from the lordofthejars/crimes:1.0 image. After running the previous command, a new pod running that image, plus a service and a replication controller, are created.
After that we need to create a route so the service is available on the public internet.
oc expose svc crimes --name=crimeswelcome
To update the application to a new image version (1.1 in this case), import the new image and patch the deployment config so that an image change triggers a new deployment:
oc import-image crimes:1.1 --from=lordofthejars/crimes:1.1
oc patch dc/crimes -p '{"spec": { "triggers":[ {"type": "ConfigChange", "type": "ImageChange" , "imageChangeParams": {"automatic": true, "containerNames":["crimes"],"from": {"name":"crimes:1.1"}}}]}}'
And finally you can do the rollout of the application by using:
oc rollout latest dc/crimes
If something goes wrong, you can roll back to a previous deployment (crimes-1 in this case):
oc rollback crimes-1
And when you are finished, you can remove everything that was created:
oc delete all --all
Commands: https://gist.github.com/lordofthejars/9fb5f08e47775a185a9b1f80f4af7aff
We keep learning,
Alex.
Yo listen up here's a story, About a little guy that lives in a blue world, And all day and all night and, everything he sees is just blue, Like him inside and outside (Blue - Eiffel 65)
Music: https://www.youtube.com/watch?v=68ugkg9RePc
Labels: deployment, devops, Docker, openshift
Friday, May 19, 2017
Running Parallel Tests in Docker
- You can have one Docker Host for each parallel test.
- You can reuse the same Docker Host and use Arquillian Cube Star Operator.
- Defining a docker-compose file.
- Defining a Container Object.
- Using Container Object DSL.
But if you don't want to use the docker-compose approach, you can also define the container programmatically by using the Container Object DSL, which also supports the star operator. In this case the example looks like:
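Something like the following sketch, assuming the star operator is expressed by suffixing the container name with *, just as in the docker-compose case:

import org.arquillian.cube.docker.impl.client.containerobject.dsl.Container;
import org.arquillian.cube.docker.impl.client.containerobject.dsl.DockerContainer;

// Inside the test class: the trailing * (star operator) asks Cube to generate a unique
// container name per execution, so parallel runs do not clash on the same Docker host
@DockerContainer
Container redis = Container.withContainerName("redis*")
        .fromImage("redis:3.2.6")
        .withPortBinding(6379)
        .build();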
The approach is the same, but using Container Objects (you need Arquillian Cube 1.4.0 to run it with Container Objects).
To read more about star operator just check http://arquillian.org/arquillian-cube/#_parallel_execution
Source code: https://github.com/lordofthejars/parallel-docker
We keep learning,
Alex.
I can show you the world, Shining, shimmering, splendid, Tell me, princess, now when did, You last let your heart decide? (A Whole New World - Aladdin)
Music: https://www.youtube.com/watch?v=sVxUUotm1P4
Labels: arquillian, Docker, testing
Friday, May 12, 2017
Testing Spring Data + Spring Boot applications with Arquillian (Part 2)
- The first one is that we are using the REST API to prepare the data set of the test. The problem here is that the test might fail not because of a failure in the code under test but because of the preparation of the test (the insertion of data).
- The second one is that if the POST endpoint changes its format/location, then you need to remember to change it everywhere it is used in the tests.
- The last one is that each test should leave the environment as it found it, so every test is isolated from the others. The problem is that with this approach you need to delete the elements previously inserted by POST. This means adding a DELETE HTTP method, which might not always be implemented in the endpoint, or might be restricted to some concrete users, so you would also need to deal with special authentication concerns.
Also, the population data is stored inside a file, which means it can be reused across all tests and easily changed in case of any schema update.
Let's see the example from Part 1 of this post, updated to use APE.
And the file (pings.json) used to populate the Redis instance with data looks like:
Project can be found at https://github.com/arquillian-testing-microservices/pingpongbootredis
We keep learning,
Alex
Y es que no puedo estar así, Las manecillas del reloj, Son el demonio que me tiene hablando solo (Tocado y Hundido - Melendi)
Music: https://www.youtube.com/watch?v=1JwAr4ZxdMk
Labels: arquillian persistence extension, integration tests, nosql, nosqlunit, redis, spring boot
Tuesday, May 02, 2017
Testing Dockerized SQL Databases
The first and second problems are fixed by Arquillian Cube (http://arquillian.org/arquillian-cube/). It manages the lifecycle of the containers, starting and stopping them automatically before and after test class execution. It also detects when you are running in a DinD (Docker-in-Docker) situation and configures the started containers accordingly.
Arquillian Cube offers three different ways to define container(s).
- Defining a docker-compose file.
- Defining a Container Object.
- Using Container Object DSL.
For this post, the Container Object DSL approach is the one used. To define a container that is started before the tests run and stopped afterwards, you only need to write the next piece of code.
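Something along these lines (the PostgreSQL image tag, credentials and exact rule name are illustrative):

import org.arquillian.cube.docker.junit.rule.ContainerDslRule;
import org.junit.ClassRule;

public class PostgresqlTest {

    // Starts a throwaway PostgreSQL container before the test class runs and removes it afterwards
    @ClassRule
    public static ContainerDslRule postgres = new ContainerDslRule("postgres:9.6")
            .withPortBinding(5432)
            .withEnvironment("POSTGRES_PASSWORD", "postgres");
}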
Flyway is useful here since you can start the Docker container and then apply all migrations to the empty database using Flyway.
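For instance, with the classic (pre-5.x) Flyway API, and assuming the postgres rule above exposes the container address and bound port (accessor names are approximate):

import org.flywaydb.core.Flyway;
import org.junit.BeforeClass;

// Runs after the @ClassRule has started the container, so the database is already reachable
@BeforeClass
public static void migrateDatabaseSchema() {
    Flyway flyway = new Flyway();
    flyway.setDataSource(
            "jdbc:postgresql://" + postgres.getIpAddress() + ":" + postgres.getBindPort(5432) + "/postgres",
            "postgres", "postgres");
    flyway.migrate();   // applies the migrations found in classpath:db/migration
}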
The fourth problem can be fixed by using tools like DBUnit. It puts your database into a known state between test runs by populating it with known data and cleaning it up after the test execution.
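To make the mechanics concrete, this is roughly what happens under the hood with plain DBUnit (not the APE API itself; painters.xml is a hypothetical dataset in the test resources, and the snippet would live in the same test class as the postgres rule):

import java.sql.DriverManager;

import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.operation.DatabaseOperation;
import org.junit.After;
import org.junit.Before;

private IDatabaseConnection connection;
private IDataSet dataSet;

@Before
public void populateDatabase() throws Exception {
    // Seed the dockerized PostgreSQL instance with a known dataset before each test
    connection = new DatabaseConnection(DriverManager.getConnection(
            "jdbc:postgresql://" + postgres.getIpAddress() + ":" + postgres.getBindPort(5432) + "/postgres",
            "postgres", "postgres"));
    dataSet = new FlatXmlDataSetBuilder().build(getClass().getResourceAsStream("/painters.xml"));
    DatabaseOperation.CLEAN_INSERT.execute(connection, dataSet);
}

@After
public void cleanDatabase() throws Exception {
    // Wipe the inserted data so the next test starts from a clean state
    DatabaseOperation.DELETE_ALL.execute(connection, dataSet);
    connection.close();
}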
Arquillian integrates with both of these tools (Flyway and DBUnit), among others, through its Arquillian Persistence Extension (aka APE).
An example of how to use APE with DBUnit is shown in the next snippet:
You can use the Arquillian runner, as shown in dbunit-ftest-example, or a JUnit Rule, as shown in the previous snippet. Choosing one or the other depends on your test requirements.
So how does everything fit together in Arquillian, so that a Docker container with a SQL database such as PostgreSQL is booted up before test class execution, the SQL schema is migrated and populated with data, the test method is executed, the whole database is cleaned so the next test method finds a clean database, and finally, after test class execution, the Docker container is destroyed?
Let's see it in the next example:
The test is not complicated, and it is pretty much self-explanatory about what it does in each step. You are creating the Docker container using the Arquillian Cube DSL, and you are configuring the populators using the Arquillian APE DSL.
So thanks to Arquillian Cube and Arquillian APE you can make your test totally isolated from your runtime: it is always executed against the same PostgreSQL database version, and each test method execution is isolated.
You can see full code at https://github.com/arquillian/arquillian-extension-persistence/tree/2.0.0/arquillian-ape-sql/standalone/dbunit-flyway-ftest
We keep learning,
Alex
Ya no me importa nada, Ni el día ni la hora, Si lo he perdido todo, Me has dejado en las sombras (Súbeme la Radio - Enrique Iglesias)
Music: https://www.youtube.com/watch?v=9sg-A-eS6Ig
Labels: arquillian, arquillian ape, arquillian cube, dbunit, Docker, flyway, mysql, persistence layer tests, sql
Wednesday, April 26, 2017
Testing Spring Data + Spring Boot applications with Arquillian (Part 1)
In the next example you can see how simple it is to use Spring Boot and Spring Data Redis.
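As a minimal illustration (a hypothetical Ping entity; each type would live in its own file):

import org.springframework.data.annotation.Id;
import org.springframework.data.redis.core.RedisHash;
import org.springframework.data.repository.CrudRepository;

// An entity stored as a Redis hash
@RedisHash("ping")
class Ping {
    @Id
    private String id;
    private String message;
    // constructors, getters and setters omitted for brevity
}

// Spring Data Redis derives all CRUD operations from the repository interface
interface PingRepository extends CrudRepository<Ping, String> {
}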
- Defining a docker-compose file.
- Defining a Container Object.
- Using Container Object DSL.
The full test looks like:
Notice that it is a simple Spring Boot test using its usual bits and bobs, but the Arquillian Cube JUnit Rule is used in the test to start and stop the Redis image.
The last important thing to notice is that the test contains an implementation of ApplicationContextInitializer, so we can configure the environment with the Docker data (host and bound port of the Redis container) and Spring Data Redis can connect to the correct instance.
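A sketch of such an initializer (assuming a Cube rule called redisRule declared in the test class, and the Spring Boot 1.x EnvironmentTestUtils helper, which Spring Boot 2 later replaced with TestPropertyValues):

import org.springframework.boot.test.util.EnvironmentTestUtils;
import org.springframework.context.ApplicationContextInitializer;
import org.springframework.context.ConfigurableApplicationContext;

// Declared as a static nested class inside the test
public static class RedisInitializer implements ApplicationContextInitializer<ConfigurableApplicationContext> {

    @Override
    public void initialize(ConfigurableApplicationContext context) {
        // redisRule is the (hypothetically named) Arquillian Cube rule declared in the test class
        EnvironmentTestUtils.addEnvironment("cube", context.getEnvironment(),
                "spring.redis.host=" + redisRule.getIpAddress(),
                "spring.redis.port=" + redisRule.getBindPort(6379));
    }
}

The test would then point at it with something like @ContextConfiguration(initializers = RedisInitializer.class).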
Last but not least, the build.gradle file defines the required dependencies and looks like:
You can read more about Arquillian Cube at http://arquillian.org/arquillian-cube/
We keep learning,
Alex
Hercules and his gifts, Spiderman's control, And Batman with his fists, And clearly I don't see myself upon that list (Something just like this - The Chainsmokers & Coldplay)
Music: https://www.youtube.com/watch?v=FM7MFYoylVs
Labels: arquillian, Docker, redis, spring boot, spring data, testing
Monday, April 10, 2017
Arquillian Persistence with MongoDB and Docker
Ridi, Pagliaccio, Sul tuo amore infranto! Ridi del duol, che t'avvelena il cor! (Vesti la giubba (Pagliacci) - Leoncavallo)
Music: https://www.youtube.com/watch?v=Z0PMq4XGtZ4
Labels: arquillian persistence extension, Docker, java, mongodb, nosql, nosqlunit, persistence layer tests, testing
Friday, March 24, 2017
3 ways of using Docker Containers for Testing in Arquillian
The first approach is using the docker-compose format. You only need to define the docker-compose file required for your tests, and Arquillian Cube automatically reads it, starts all the containers, executes the tests and finally stops and removes them.
In the previous example a docker-compose file (version 2) is defined (it can be stored in the root of the project, in src/{main, test}/docker or in src/{main, test}/resources, and Arquillian Cube will pick it up automatically). Cube creates the defined network, starts the container of the defined service, executes the given test, and finally stops and removes the network and the container. The key point here is that this happens automatically; you don't need to do anything manually.
The second approach is using the Container Object pattern. You can think of a Container Object as a mechanism to encapsulate areas (data and actions) related to a container that your test might interact with. In this case no docker-compose file is required.
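As an illustrative sketch (annotation and package names are written from memory of the Arquillian Cube docs, so treat them as approximate), a container object wrapping an FTP server could look like:

import org.apache.commons.net.ftp.FTPClient;
import org.arquillian.cube.containerobject.Cube;
import org.arquillian.cube.containerobject.CubeDockerFile;
import org.arquillian.cube.containerobject.HostIp;
import org.arquillian.cube.containerobject.HostPort;

@Cube(value = "ftp", portBinding = "2121->21")
@CubeDockerFile   // points at a Dockerfile bundled with the test resources
public class FtpContainer {

    @HostIp
    private String ip;

    @HostPort(21)
    private int port;

    // Encapsulated operation: check whether a given file has been uploaded to the server
    public boolean fileIsUploaded(String user, String password, String filename) throws Exception {
        FTPClient client = new FTPClient();
        client.connect(ip, port);
        try {
            client.login(user, password);
            return client.listFiles(filename).length > 0;
        } finally {
            client.disconnect();
        }
    }
}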
In this case you are using annotations to define what the container should look like. Also, since you are using Java objects, you can add methods that encapsulate operations on the container itself, as in this object, where an operation that checks whether a file has been uploaded has been added to the container object.
Finally, in your test you only need to annotate it with the @Cube annotation.
Notice that you can even create the definition of the container programmatically:
In this case a Dockerfile is created programmatically within the Container Object and is used to build and start the container.
In this case the approach is very similar to the previous one, but you are using a DSL to define the container.
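A sketch of such a test using the Container Object DSL (image and assertion are illustrative):

import static org.junit.Assert.assertEquals;

import java.net.HttpURLConnection;
import java.net.URL;

import org.arquillian.cube.docker.impl.client.containerobject.dsl.Container;
import org.arquillian.cube.docker.impl.client.containerobject.dsl.DockerContainer;
import org.jboss.arquillian.junit.Arquillian;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(Arquillian.class)
public class PingPongDslTest {

    // The whole container definition lives in the test: no docker-compose file or extra class needed
    @DockerContainer
    Container pingpong = Container.withContainerName("pingpong")
            .fromImage("jonmorehouse/ping-pong")
            .withPortBinding(8080)
            .build();

    @Test
    public void should_return_ok_from_pingpong_server() throws Exception {
        URL url = new URL("http://" + pingpong.getIpAddress() + ":" + pingpong.getBindPort(8080) + "/");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        assertEquals(200, connection.getResponseCode());
    }
}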
You've got three ways: the first is the standard one following docker-compose conventions, while the other two can be used to define reusable pieces for your tests.
You can read more about Arquillian Cube at http://arquillian.org/arquillian-cube/
We keep learning,
Alex
And did you think this fool could never win, Well look at me, i'm coming back again, I got a taste of love in a simple way, And if you need to know while i'm still standing you just fade away (I'm still Standing - Elton John)
Music: https://www.youtube.com/watch?v=ZHwVBirqD2s
Labels: arquillian, container object, Docker, integration tests, patterns, testing
Monday, January 09, 2017
Develop A Microservice with Forge, WildFly Swarm and Arquillian. Keep It Simple.
In this post we are going to see how to develop a microservice using WildFly Swarm and Forge, and how to test it with Arquillian and REST Assured.
First of all, go to the directory where you want to store the project and run forge.
After a few seconds, you'll see that Forge is started and you are ready to type commands:
I'm not giving up today, There's nothing getting in my way, And if you knock knock me over, I will get back up again (Get Back Up Again - Trolls)
Music: https://www.youtube.com/watch?v=IFuFm0m2wj0
Labels: arquillian, forge, live demo, microservices, rest, rest assured, testing, wildfly swarm