Jaxon incorporates knowledge from large unsupervised corpora, automatically parameterizes its data processing pipelines to fit specific datasets, and ensembles previously created labeling models for new data-labeling tasks. Compared with traditional human-powered approaches, Jaxon is dramatically faster (minutes versus months) at a fraction of the cost, introduces consistency, reduces bias and human error, and enables online learning scenarios in which training sets are updated continuously.


Saves time


Saves money




Increases accuracy


Works with Streaming Data


As a customizable solution, Jaxon adapts to each organization’s nuanced data and domain-specific terminology. Training sets are created from existing data as well as new text streaming in from online and internal sources. Jaxon’s Studio lets users curate meta-models, tune pipelines, and ensemble labelers to optimize training sets. With Jaxon, fine-tuning training sets to improve model performance can be done quickly and repeated as often as needed.
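Jaxon’s internal pipeline is not public, but the general idea of ensembling labelers follows the weak-supervision pattern: several imperfect labeling functions vote on each example, and confident agreement becomes a training label. The sketch below is illustrative only; the function names and toy labelers are assumptions, not Jaxon’s API.

```python
from collections import Counter

def ensemble_label(labelers, example):
    """Majority vote over several labeling functions.

    Each labeler returns a label or None to abstain. Ties and
    all-abstain cases yield None so the example can be skipped.
    """
    votes = [v for fn in labelers if (v := fn(example)) is not None]
    if not votes:
        return None
    ranked = Counter(votes).most_common(2)
    if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
        return None  # tie: abstain rather than guess
    return ranked[0][0]

# Toy keyword labelers for movie-review sentiment (illustrative only)
def kw_positive(text):
    return "pos" if "great" in text or "loved" in text else None

def kw_negative(text):
    return "neg" if "boring" in text or "awful" in text else None

def exclaim(text):
    return "pos" if text.endswith("!") else None

labelers = [kw_positive, kw_negative, exclaim]
print(ensemble_label(labelers, "I loved it!"))  # pos
print(ensemble_label(labelers, "So boring."))   # neg
```

Abstaining on ties and no-vote cases keeps noisy labels out of the training set, which matters more than coverage when the labels feed a downstream model.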

New data is being created every day.

Models need to keep pace.

Jaxon allows them to do so.


With Jaxon, users design the factory; they don’t work the assembly line!


Benchmark: IMDb Sentiment Analysis

Jaxon vastly improves model performance when pre-labeled training data is minimal.

In this benchmark, Jaxon labels improved the accuracy of a GRU (gated recurrent unit) neural network by up to 20%.

Using positive and negative movie reviews from an IMDb dataset, a Jaxon model combining bootstrapped labels with pre-labeled training data showed a marked increase in accuracy compared with a model trained on the ground-truth labels alone.
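The benchmark setup can be pictured as augmenting a small hand-labeled set with confident bootstrapped labels before training. This is a minimal sketch of that idea; the function names, the confidence threshold, and the toy labeler are illustrative assumptions, not the actual benchmark code.

```python
def build_training_set(ground_truth, unlabeled, labeler, min_confidence=0.8):
    """Augment a small hand-labeled set with bootstrapped labels.

    `labeler` returns (label, confidence) for an example; only
    sufficiently confident predictions are added to the training set.
    """
    bootstrapped = []
    for example in unlabeled:
        label, confidence = labeler(example)
        if confidence >= min_confidence:
            bootstrapped.append((example, label))
    return list(ground_truth) + bootstrapped

# Illustrative usage: one hand-labeled review plus bootstrapped labels
ground_truth = [("loved every minute", "pos")]

def toy_labeler(text):
    # Stand-in for an ensemble of labeling models
    return ("pos", 0.9) if "great" in text else ("neg", 0.5)

training_set = build_training_set(
    ground_truth, ["great film", "meh"], toy_labeler
)
print(training_set)  # [('loved every minute', 'pos'), ('great film', 'pos')]
```

The combined set is then fed to the downstream classifier (a GRU in this benchmark), which is where the reported accuracy gain over ground-truth labels alone would come from.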