
Build AI Agents in Java with Embabel: Step-by-Step Guide

Feb 18, 2026
Catherine Edelveis

Embabel is an agent framework for the JVM that mixes LLMs and domain models, enabling you to integrate sophisticated agentic flows into your application, guarded by strong typing and your own code. If you want reliable AI integration in your enterprise application, Embabel is worth a look.

This article explores what Embabel is, how it is different from other frameworks, and how to integrate Embabel into an existing application, from dependencies to the complete agentic workflow.

The code is available on GitHub.

What is Embabel?

Embabel lets you build agentic workflows based on the concept of goals, actions, and conditions.

The highest level is the goal, which is the result the agent is trying to achieve on behalf of the user. The goal is defined by the developer in a method annotated with @Goal.

To achieve this goal, an agent uses actions. Actions are defined in methods annotated with @Action. Some actions use an LLM; others are deterministic steps like database lookups, scoring, validation, or calls to services.

An agent plans the workflow to achieve the goal, but it does so independently of the developer: the plan is formulated dynamically and reassessed after each action.

There are also conditions, facts about the current state that the agent evaluates during planning to decide which actions are applicable.

This architecture is what makes Embabel an optimal choice for enterprise JVM applications that want to benefit from agentic AI. You get:

  • Sophisticated planning. It adds a real planning step using a non-LLM algorithm.
  • Extensibility. You can add new actions/domain objects without rewriting the existing code.
  • Strong typing and object orientation. Prompts and code interact through domain objects.
  • LLM mixing. You can easily combine models for cost, privacy, or performance tradeoffs, and Embabel will pick the right one for the task and requirements.

In addition, Embabel is built on Spring and the JVM, which lets it benefit from existing enterprise features and be testable end-to-end.

Embabel vs Spring AI

At this point, you may have a logical question: why not just use Spring AI? How does Embabel differ from Spring AI?

The thing is that Embabel and Spring AI are different layers of the stack. Spring AI is primarily an AI integration framework for Spring apps, which focuses on giving you portable abstractions for models, vector stores, tools/function calling, etc.

Embabel, on the other hand, sits a level above: it can use these underlying capabilities but adds a higher-level execution model for goal-driven workflows. You define actions over domain objects, and Embabel’s platform can plan sequences of actions using a GOAP (Goal-Oriented Action Planning) planner, rather than you hard-coding a single fixed chain.

In practice, Spring AI helps you talk to models and wire AI components into your app. Embabel helps you build agents that decide which actions to run to reach a goal based on the input/output types and replan on the fly.

How to Integrate Embabel into a Java App

What We Will Build

We will take a cyberpunk-themed demo as a baseline. It stores civilians with their implants, collects monitoring stats from the implants, and searches for monitoring logs based on location and time window.

What we want to do is integrate the implant incident triage workflow. Someone reports unusual telemetry in a place and time window, and the system turns that into a full incident case with risk level, affected devices, likely cause, and a detailed containment plan. 

Prerequisites

  • Spring Boot 3.x. At the time of writing, Embabel doesn’t yet support Spring Boot 4.x.
  • Java 17+. This demo was built using Liberica JDK 25 recommended by Spring.
  • Docker and Docker Compose for spinning up the database instance in a container.
  • Your favorite IDE.

Add the Dependencies

Let’s start with adding the necessary dependencies. We will use Ollama for this demo, but you can use Embabel with different LLMs or even mix them. So, we will need the embabel-agent-starter-ollama dependency. We also need embabel-agent-starter-shell and spring-shell-starter, as we will run the app in a shell. Alternatively, you can use Embabel’s MCP starter or the basic starter.

<properties>
        <java.version>25</java.version>
        <embabel-agent.version>0.3.4-SNAPSHOT</embabel-agent.version>
        <spring-shell.version>3.4.0</spring-shell.version>
</properties>

<dependencies>
        <dependency>
            <groupId>com.embabel.agent</groupId>
            <artifactId>embabel-agent-starter-ollama</artifactId>
            <version>${embabel-agent.version}</version>
        </dependency>
        <dependency>
            <groupId>com.embabel.agent</groupId>
            <artifactId>embabel-agent-starter-shell</artifactId>
            <version>${embabel-agent.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.shell</groupId>
            <artifactId>spring-shell-starter</artifactId>
        </dependency>
 </dependencies>

As we will run the app via the shell, we need to disable the Spring web application type and enable Spring Shell’s interactive mode. You can also configure Embabel in application.properties, for instance, to set the default model it will use.

spring.main.web-application-type=none
spring.shell.interactive.enabled=true

embabel.models.default-llm=llama3.1:8b

Create the First Action

Let’s start with something basic. Right now, we are not showcasing Embabel’s power, but rather, getting to know the tool. We will enhance this logic later on.

We will add a single action at first. The agent will receive the user input and provide an IncidentAssessment object as an output. For that, it will parse the input into an IncidentSignal object, find all affected implants in the database, and calculate the risk level using deterministic logic written in Java.

public record IncidentSignal(
        @NotNull double longitude,
        @NotNull double latitude,
        @Positive double radiusMeters,
        @NotNull @Past LocalDateTime from,
        @NotNull @Past LocalDateTime to,
        @NotNull @NotEmpty String metric,
        @Positive double threshold
) { }

public enum RiskLevel {
    LOW,
    MEDIUM,
    HIGH,
    CRITICAL
}

public record IncidentAssessment(
        IncidentSignal signal,
        int numberOfLogs,
        RiskLevel riskLevel
) { }

The only LLM-related task in this case is parsing the user input. The logic for the database lookup and the risk assessment is defined deterministically in the application code. The output of the agent’s work is a Java object, IncidentAssessment, which makes the agent’s response more reliable.

Let’s create a class called IncidentTriageAgent. The @Agent annotation marks this class as an Embabel agent component. In practice, it’s also a Spring bean. The description is metadata used for discovery, documentation, and potentially for planning or selection when multiple agents exist. So, Embabel knows that this agent can investigate telemetry anomalies.

@Agent(description = "Investigates and assesses implant telemetry anomalies in a geo/time window using MongoDB logs")
public class IncidentTriageAgent {
}

Next, we inject ImplantMonitoringLogService. It is the “tool” here, but not an LLM tool. Rather, it’s the domain service that queries MongoDB. Embabel is ok with actions calling a normal application service.

private final ImplantMonitoringLogService logService;

public IncidentTriageAgent(ImplantMonitoringLogService logService) {
      this.logService = logService;
}
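To make that boundary concrete, here is a plausible shape for the service, inferred from how it is called later in this walkthrough. The field lists and the Point stand-in are hypothetical; the real MongoDB-backed implementation lives in the demo repo.

```java
import java.time.LocalDateTime;
import java.util.List;
import java.util.Map;

// Stand-ins for the demo's domain types; field sets are hypothetical.
record Point(double x, double y) {}
record ImplantMonitoringLog(String serialNumber, double neuralLatencyMs) {}

// Plausible shape of the injected domain service: a plain Spring bean
// that queries MongoDB, with no LLM involvement at all.
interface ImplantMonitoringLogService {
    Map<String, List<ImplantMonitoringLog>> findLogsByAreaAndTime(
            Point center, double radiusMeters,
            LocalDateTime from, LocalDateTime to);
}
```

The point of the sketch: an Embabel action can depend on ordinary typed services, so the agent's "tools" stay testable Java code.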

The action is one method that achieves a goal. So, let’s create the triageIncidentSignal() method that returns IncidentAssessment — the structured output of this action. Two annotations are important here:

  • @Action, which means that this method is an executable step in Embabel’s world.
  • @AchievesGoal, which means that this action is also considered a goal-completing action. It means that producing an IncidentAssessment is “done” for the described goal.

Parameters:

  • UserInput, which is the raw user message (from CLI, chat, etc.).
  • OperationContext, which is Embabel’s runtime context. This is how you access AI features.

@AchievesGoal(description = "Parses an incident signal and assigns a risk level")
@Action
public IncidentAssessment triageIncidentSignal(UserInput input, OperationContext context) {
}

In this method, we first use the LLM to parse the input into an IncidentSignal object by calling context.ai().withDefaultLlm().createObject(). What does it do?

  • It sends the prompt and user message to the default LLM configured for the app.
  • It asks the model to output something that can be deserialized into the IncidentSignal object.

IncidentSignal signal = context.ai().withDefaultLlm().createObject(
        """
        Extract an IncidentSignal from the user's message.

        Output rules:
        - lon is a number in [-180, 180]
        - lat is a number in [-90, 90]
        - radiusMeters is a number in meters
        - from/to are ISO-8601 LocalDateTime (e.g. 2026-02-02T02:00:00)
        - metric is one of: neuralLatencyMs, cpuUsagePct, powerUsageUw
        - threshold is a finite number

        User message:
        %s
        """.formatted(input.getContent()), IncidentSignal.class);

As a result, instead of getting back an unstructured blob of text that you then regex, you get a typed object with fields like:

  • longitude, latitude
  • radiusMeters
  • from, to (as LocalDateTime per your rules)
  • metric (must be one of the allowed strings)
  • threshold (number)

We are basically telling the LLM to behave like a parser. Then, we deterministically query MongoDB logs using our domain service.

Map<String, List<ImplantMonitoringLog>> logs = logService.findLogsByAreaAndTime(
        toSpringPoint(signal.longitude(), signal.latitude()),
        signal.radiusMeters(),
        signal.from(),
        signal.to());

Then, we classify the risk, also deterministically.

RiskLevel risk = classifyRisk(logs, signal);

After that, we build the IncidentAssessment object from the data we retrieved and calculated and return it to the user.

return new IncidentAssessment(signal, logs.size(), risk);

The whole implementation:

@Agent(description = "Investigates and assesses implant telemetry anomalies in a geo/time window using MongoDB logs")
public class IncidentTriageAgent {

    private final ImplantMonitoringLogService logService;

    public IncidentTriageAgent(ImplantMonitoringLogService logService) {
        this.logService = logService;
    }

    @AchievesGoal(description = "Parses an incident signal and assigns a risk level")
    @Action
    public IncidentAssessment triageIncidentSignal(UserInput input, OperationContext context) {

        IncidentSignal signal = context.ai().withDefaultLlm().createObject(
                """
                Extract an IncidentSignal from the user's message.

                Output rules:
                - lon is a number in [-180, 180]
                - lat is a number in [-90, 90]
                - radiusMeters is a number in meters
                - from/to are ISO-8601 LocalDateTime (e.g. 2026-02-02T02:00:00)
                - metric is one of: neuralLatencyMs, cpuUsagePct, powerUsageUw
                - threshold is a finite number

                User message:
                %s
                """.formatted(input.getContent()), IncidentSignal.class);

        Map<String, List<ImplantMonitoringLog>> logs = logService.findLogsByAreaAndTime(
                toSpringPoint(signal.longitude(), signal.latitude()),
                signal.radiusMeters(),
                signal.from(),
                signal.to());

        RiskLevel risk = classifyRisk(logs, signal);

        return new IncidentAssessment(signal, logs.size(), risk);
    }

    private static RiskLevel classifyRisk(
            Map<String, List<ImplantMonitoringLog>> logs,
            IncidentSignal signal) {

        if (logs.isEmpty()) return RiskLevel.LOW;

        long distinctImplants = logs.size();

        long exceedCount = logs.values()
                .stream()
                .flatMap(List::stream)
                .mapToDouble(log -> getMetricValue(log, signal.metric()))
                .filter(value -> value >= signal.threshold())
                .count();


        if (exceedCount >= 60 && distinctImplants >= 5) return RiskLevel.CRITICAL;
        if (exceedCount >= 30 && distinctImplants >= 3) return RiskLevel.HIGH;
        if (exceedCount >= 10) return RiskLevel.MEDIUM;
        return RiskLevel.LOW;
    }

    private static double getMetricValue(ImplantMonitoringLog log, String metric) {
        return switch (metric) {
            case "neuralLatencyMs" -> log.getNeuralLatencyMs();
            case "cpuUsagePct" -> log.getCpuUsagePct();
            case "powerUsageUw" -> log.getPowerUsageUw();
            default -> throw new IllegalArgumentException("Unsupported metric: " + metric);
        };
    }

    private static Point toSpringPoint(double lon, double lat) {
        return new Point(lon, lat);
    }

}

The code works as desired if we run it, but this is too easy! Right now, we are using Embabel as a fancy JSON parser, which is overkill.

If we want to truly see its power, which is goal-driven planning across multiple typed steps with tool boundaries and adaptive replanning, we must give it something to work with.

For that, we should turn the current one-step triage into a multi-action incident workflow where the agent chooses what to do next based on what it learns.

Build a Multi-Action Workflow

Let’s enrich our incident domain model and add some new classes, such as RootCauseHypothesis, EstimatedBlastRadius, ContainmentPlan, IncidentCase, and a few others. I won’t paste their implementations here so as not to turn the article into an endless scroll. You can study them in the repo.

Now, let’s turn our agent into a pipeline that takes an incident report and turns it into a complete incident case. The goal will be to produce an IncidentCase object with the initial incident signal, assessment, list of affected implants, blast radius, hypothesis as to what might have caused the incident, and a containment plan with detailed steps.

Step 1: Parse the User Message into Structured Data (LLM-Driven Step)

The parseIncidentSignal() method will be our entry point.

@Action(description = "Parse user's message into an IncidentSignal")
public IncidentSignal parseIncidentSignal(UserInput input, OperationContext context) {
}

It takes raw text from the user, like “lat/lon, radius, time window, metric, threshold”. Then, it calls the default LLM and says: “Extract an IncidentSignal object, and follow these rules.” Those rules are important because they constrain the output and increase the reliability of parsing:

  • Valid latitude and longitude ranges;
  • ISO LocalDateTime format. LocalDateTime is fine for a small demo, but with databases it is better to store timestamps in UTC or as zoned date-times;
  • The metric must be one of three allowed values;
  • The threshold must be a real number.

@Action(description = "Parse user's message into an IncidentSignal")
public IncidentSignal parseIncidentSignal(UserInput input, OperationContext context) {
    return context.ai().withDefaultLlm().createObject(
            """
                    Extract an IncidentSignal from the user's message.
                    
                    Output rules:
                    - lon is a number in [-180, 180]
                    - lat is a number in [-90, 90]
                    - radiusMeters is a number in meters
                    - from/to are ISO-8601 LocalDateTime (e.g. 2026-02-02T02:00:00)
                    - metric is one of: neuralLatencyMs, cpuUsagePct, powerUsageUw
                    - threshold is a finite number
                    
                    User message:
                    %s
                    """.formatted(input.getContent()), IncidentSignal.class);

}

As a result, we go from the unstructured text to a typed object, IncidentSignal.
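The note above about time formats can be sketched briefly. This is illustrative only: it pins an LLM-parsed wall-clock time to an assumed zone (the zone choice is not part of the demo) and converts it to a UTC Instant, which is friendlier for database storage and comparison.

```java
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;

public class TimeNormalization {
    // Convert a parsed LocalDateTime into a UTC Instant; the zone is an
    // assumption for illustration, not something the demo configures.
    static Instant toUtc(LocalDateTime local, ZoneId zone) {
        return local.atZone(zone).toInstant();
    }

    public static void main(String[] args) {
        LocalDateTime local = LocalDateTime.parse("2026-02-02T02:00:00");
        System.out.println(toUtc(local, ZoneId.of("America/New_York")));
        // 2026-02-02T07:00:00Z (EST is UTC-5 in February)
    }
}
```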

Step 2: Pull the Logs and Compute a Risk Level

The next action, the triageIncident() method, takes that IncidentSignal and determines the risk level.

@Action(description = "Classify risk level for a signal using logs")
public IncidentAssessment triageIncident(IncidentSignal signal) {
    Map<String, List<ImplantMonitoringLog>> logs = extractLogs(signal);
    RiskLevel risk = classifyRisk(logs, signal);
    return new IncidentAssessment(signal, logs.size(), risk);
}

First, it calls the extractLogs() method, which queries MongoDB logs using:

  • location (lon/lat converted to a Spring Point);
  • radius in meters;
  • from/to timestamps.

private Map<String, List<ImplantMonitoringLog>> extractLogs(IncidentSignal signal) {
    return logService.findLogsByAreaAndTime(
            toSpringPoint(signal.longitude(), signal.latitude()),
            signal.radiusMeters(),
            signal.from(),
            signal.to());
}

That comes back as a map, where the key is the implant serial number and the value is a list of logs for that implant.
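If you are curious how a flat query result ends up in that shape, a groupingBy collector is the usual trick. This is a standalone sketch with a hypothetical stand-in log type, not the repo’s actual implementation.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class LogGrouping {
    // Minimal stand-in for ImplantMonitoringLog (hypothetical fields).
    record Log(String serialNumber, double neuralLatencyMs) {}

    // Group flat query results into the Map<serial, logs> shape
    // that the triage action consumes.
    static Map<String, List<Log>> groupBySerial(List<Log> raw) {
        return raw.stream().collect(Collectors.groupingBy(Log::serialNumber));
    }

    public static void main(String[] args) {
        List<Log> raw = List.of(
                new Log("IMP-001", 140.0),
                new Log("IMP-001", 95.0),
                new Log("IMP-002", 210.0));
        System.out.println(groupBySerial(raw).get("IMP-001").size()); // 2
    }
}
```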

Then, we classify risk in the classifyRisk() method. Here, as well, we don’t use the LLM, only pure Java.

We count:

  • how many implants are involved;
  • how many log entries exceed the threshold for the chosen metric.

Then, we apply deterministic thresholds:

  • CRITICAL for lots of exceedances and many implants;
  • HIGH for a smaller list but still spread;
  • MEDIUM for at least 10 exceedances;
  • Otherwise, LOW.

private static RiskLevel classifyRisk(
        Map<String, List<ImplantMonitoringLog>> logs,
        IncidentSignal signal) {

    if (logs.isEmpty()) return RiskLevel.LOW;

    long distinctImplants = logs.size();

    long exceedCount = logs.values()
            .stream()
            .flatMap(List::stream)
            .mapToDouble(log -> getMetricValue(log, signal.metric()))
            .filter(value -> value >= signal.threshold())
            .count();

    if (exceedCount >= 60 && distinctImplants >= 5) return RiskLevel.CRITICAL;
    if (exceedCount >= 30 && distinctImplants >= 3) return RiskLevel.HIGH;
    if (exceedCount >= 10) return RiskLevel.MEDIUM;
    return RiskLevel.LOW;
}

private static double getMetricValue(ImplantMonitoringLog log, String metric) {
    return switch (metric) {
        case "neuralLatencyMs" -> log.getNeuralLatencyMs();
        case "cpuUsagePct" -> log.getCpuUsagePct();
        case "powerUsageUw" -> log.getPowerUsageUw();
        default -> 0.0;
    };
}

Finally, we can return an IncidentAssessment object in the original action method.

Step 3: Find Affected Implants and Rank Them

The next action is to find all affected implants.

@Action(description = "Find implants affected by the anomaly and assign anomaly scores")
public List<AffectedImplant> findAffectedImplants(IncidentSignal signal) {

    Map<String, List<ImplantMonitoringLog>> logs = extractLogs(signal);

    return logs.entrySet().stream()
            .map(entry -> toAffectedImplant(entry.getKey(), entry.getValue(), signal))
            .sorted(Comparator.comparingDouble(AffectedImplant::anomalyScore).reversed())
            .toList();

}

The findAffectedImplants() method also calls the extractLogs() method. Then, it converts each map entry into an AffectedImplant. That happens in the toAffectedImplant() method. Here, for each implant, we:

  • Compute an anomaly score;
  • Enrich the record with domain data: lot number, model, civilian ID.

Then, we sort all implants by anomaly score descending.

private AffectedImplant toAffectedImplant(
        String serialNumber,
        List<ImplantMonitoringLog> logsPerImplant,
        IncidentSignal signal) {


    if (logsPerImplant == null || logsPerImplant.isEmpty()) {
        // No logs means no evidence; score 0, rest unknown.
        return new AffectedImplant(
                serialNumber,
                null,
                null,
                null,
                0.0);
    }

    double anomalyScore = calculateAnomalyScore(logsPerImplant, signal);

    Optional<Civilian> civilian = civilianService.findCivilianByImplantSerialNumber(serialNumber);
    if (civilian.isEmpty()) {
        throw new RuntimeException("No civilian found for implant serial number " + serialNumber);
    }

    Civilian c = civilian.get();
    String civilianNationalId = c.getNationalId();

    Implant implant = c.getImplants()
            .stream()
            .filter(i -> serialNumber.equals(i.getSerialNumber()))
            .findFirst()
            .orElseThrow();


    String lotNumber = String.valueOf(implant.getLotNumber());
    String model = implant.getModel();

    return new AffectedImplant(
            serialNumber,
            lotNumber,
            model,
            civilianNationalId,
            anomalyScore);

}

private double calculateAnomalyScore(List<ImplantMonitoringLog> logsPerImplant, IncidentSignal signal) {

    double threshold = signal.threshold();
    if (threshold <= 0.0) return 0.0;

    double max = logsPerImplant.stream()
            .mapToDouble(l -> getMetricValue(l, signal.metric()))
            .max()
            .orElse(0.0);

    if (max <= threshold) return 0.0;

    double score = (max - threshold) / threshold; // exceed ratio
    return Math.min(1.0, score);
}
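The exceed-ratio formula is easy to sanity-check in isolation. The sketch below mirrors calculateAnomalyScore() with plain numbers: a peak of 150 ms against a 120 ms threshold exceeds it by 25%, so the score is 0.25.

```java
public class AnomalyScoreCheck {
    // Same exceed-ratio logic as calculateAnomalyScore(), isolated
    // so the clamping behavior is easy to see.
    static double score(double max, double threshold) {
        if (threshold <= 0.0 || max <= threshold) return 0.0;
        return Math.min(1.0, (max - threshold) / threshold);
    }

    public static void main(String[] args) {
        System.out.println(score(150.0, 120.0)); // 0.25: 25% above threshold
        System.out.println(score(100.0, 120.0)); // 0.0: never exceeded
        System.out.println(score(500.0, 120.0)); // 1.0: clamped at the cap
    }
}
```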

So, now we know not only which implants are affected, but also which of them are most severely affected.

Step 4: Generate a Root Cause Hypothesis (LLM-Driven Step)

The next action, the makeRootCauseHypothesis() method, is where we use the LLM for reasoning: we ask it to hypothesize about the anomaly’s causes based on the provided evidence.

@Action(description = "Infer a root cause hypothesis from the evidence")
public RootCauseHypothesis makeRootCauseHypothesis(IncidentSignal signal,
                                                   IncidentAssessment assessment,
                                                   List<AffectedImplant> affectedImplants,
                                                   OperationContext context) {

    return context.ai().withDefaultLlm().createObject(
            """
                    Based on the incident details, choose a root cause hypothesis.
                    
                    Rules:
                    - type must be one of: FIRMWARE_REGRESSION, BAD_LOT, ATTACK_PATTERN, ENVIRONMENTAL
                    - confidence is 0..1
                    - evidence is a short bullet list of specific signals from the inputs
                    
                    IncidentSignal: %s
                    Triage: %s
                    Top affected implants: %s
                    """.formatted(signal, assessment, affectedImplants.stream().limit(10).toList()),
            RootCauseHypothesis.class
    );
}

The inputs are:

  • The incident signal;
  • The risk assessment;
  • The top 10 affected implants.

The LLM must return a typed RootCauseHypothesis with:

  • A type enum: firmware regression, bad lot, attack pattern, or environmental cause;
  • Confidence between 0 and 1;
  • A short evidence list.

This is intentionally bounded so that the LLM picks a structured hypothesis rather than producing free-form speculation.
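Since the record definitions aren’t reproduced in this article, here is one plausible shape for RootCauseHypothesis that matches the prompt’s rules. It is a sketch; the real definition in the repo may differ.

```java
import java.util.List;

public class HypothesisSketch {
    // The bounded categories named in the prompt.
    enum HypothesisType { FIRMWARE_REGRESSION, BAD_LOT, ATTACK_PATTERN, ENVIRONMENTAL }

    // Hypothetical shape the LLM output is deserialized into;
    // see the repo for the actual record.
    record RootCauseHypothesis(
            HypothesisType type,      // one of four allowed values
            double confidence,        // 0..1 per the prompt rules
            List<String> evidence) {} // short bullets citing input signals

    public static void main(String[] args) {
        RootCauseHypothesis h = new RootCauseHypothesis(
                HypothesisType.BAD_LOT, 0.8,
                List.of("9 of 10 top affected implants share one lot"));
        System.out.println(h.type() + " @ " + h.confidence());
    }
}
```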

Step 5: Generate a Containment Plan (LLM with Constraints)

The next action, the planContainment() method, creates the response plan. Here, we also use the LLM, but with constraints.

@Action(description = "Create a containment plan based on the hypothesis and blast radius")
public ContainmentPlan planContainment(
        IncidentAssessment assessment,
        RootCauseHypothesis hypothesis,
        List<AffectedImplant> affectedImplants,
        OperationContext context) {

    boolean requiresApproval =
            assessment.riskLevel() == RiskLevel.HIGH
                    || assessment.riskLevel() == RiskLevel.CRITICAL
                    || hypothesis.type() == HypothesisType.ATTACK_PATTERN;

    EstimatedBlastRadius radius = estimateRadius(assessment.signal(), affectedImplants);

    return context.ai().withDefaultLlm().createObject(
            """
            Produce a ContainmentPlan JSON object.

            Rules:
            - steps must be a list of objects like: { "text": "..." }
            - 4-8 steps max, short imperative text

            Inputs:
            - riskLevel: %s
            - hypothesis: %s
            - blastRadius: %s
            """.formatted(assessment.riskLevel(), hypothesis, radius),
            ContainmentPlan.class
    );
}

First, we decide whether the plan needs approval. We require approval if the risk is HIGH or CRITICAL, or the hypothesis looks like an ATTACK_PATTERN.

Then, we estimate the blast radius in the estimateRadius() method.

private EstimatedBlastRadius estimateRadius(IncidentSignal signal,
                                            List<AffectedImplant> affectedImplants) {

    if (affectedImplants == null) affectedImplants = List.of();

    int affectedEstimate = affectedImplants.size();

    List<String> affectedLots = affectedImplants.stream()
            .map(AffectedImplant::lotNumber)
            .filter(s -> s != null && !s.isBlank())
            .distinct()
            .limit(5)
            .toList();

    List<String> affectedModels = affectedImplants.stream()
            .map(AffectedImplant::model)
            .filter(s -> s != null && !s.isBlank())
            .distinct()
            .limit(5)
            .toList();

    String geoSummary = "Within %.0fm of (%.5f, %.5f)"
            .formatted(signal.radiusMeters(), signal.latitude(), signal.longitude());

    String timeSummary = "From %s to %s".formatted(signal.from(), signal.to());

    return new EstimatedBlastRadius(
            affectedEstimate,
            affectedLots,
            affectedModels,
            geoSummary,
            timeSummary
    );
}

That method produces:

  • How many implants are affected;
  • Affected lots, up to five;
  • Affected models, up to five;
  • A geo summary string;
  • A time summary string.

Only after that, we call the LLM and force it into a structured ContainmentPlan with 4 to 8 steps, short imperative instructions, and step objects like { "text": "..." }.

This way, we will get a usable plan.
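Similarly, here is one plausible shape for ContainmentPlan that matches the prompt’s step objects. Again, this is a hypothetical sketch; the actual record lives in the repo.

```java
import java.util.List;

public class ContainmentPlanSketch {
    // Hypothetical records mirroring the prompt's output rules:
    // each step is an object like { "text": "..." }.
    record Step(String text) {}
    record ContainmentPlan(List<Step> steps) {}

    public static void main(String[] args) {
        ContainmentPlan plan = new ContainmentPlan(List.of(
                new Step("Quarantine implants in the affected lot"),
                new Step("Throttle telemetry uplink in the affected zone"),
                new Step("Notify the on-call firmware team"),
                new Step("Schedule a staged firmware rollback")));
        System.out.println(plan.steps().size()); // 4 steps, within the 4-8 rule
    }
}
```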

Step 6: Assemble the Incident Case (Goal)

The last action, the buildIncidentCase() method, is marked with the @AchievesGoal annotation. This is the “done” step that produces the final output object IncidentCase.

@AchievesGoal(description = "Investigate an incident signal and produce a complete incident case",
        examples = {
                "Investigate telemetry anomalies near (lon,lat) in a time window and propose containment",
                "Assess implant incident risk and recommend actions"
        })
@Action(description = "Assemble a complete IncidentCase")
public IncidentCase buildIncidentCase(
        IncidentSignal signal,
        IncidentAssessment assessment,
        List<AffectedImplant> affected,
        RootCauseHypothesis hypothesis,
        ContainmentPlan plan) {

    return new IncidentCase(
            UUID.randomUUID().toString(),
            Instant.now(),
            signal,
            assessment,
            affected,
            hypothesis,
            plan
    );
}

The incident case bundles everything the agent gathered so far:

  • ID;
  • Timestamp;
  • The original signal;
  • Incident assessment;
  • Affected implants;
  • Hypothesis;
  • Containment plan.

So, one user message becomes a complete incident ticket.

Run the application and paste something like this into the Embabel console:

x "Center: lat 40.7580 lon -73.9855, radius 1200m, yesterday 13:00–23:00, metric neuralLatencyMs, threshold 120"

You will see in the logs how the agent is planning to reach the goal, and as an output, you will get the complete incident case in a JSON format.

Congratulations, we have built an agentic workflow on the JVM, where the LLM handles the squishy parts, such as parsing, hypothesizing, and planning, and Java handles the hard truths of querying, scoring, ranking, and rules.

And that’s the whole point: you get an agent that’s smart, but not uncontrolled.

Conclusion

Embabel is a great fit when you want to spice up your application with agentic behavior without giving up JVM reliability. By modeling the workflow as typed goals and actions, we use the LLM only where it helps, for instance, for parsing and bounded reasoning, and keep the rest deterministic.

If you want more content like this on agentic workflows and modern Java development, subscribe to our blog.

 
