January 28, 2022

VowpalWabbit 9.0.0 Release Notes

Jack Gerrits

Vowpal Wabbit 9 is the first major release in over 6 years! It brings a number of usability improvements, new reductions, bug fixes, and internal improvements. The Python package has undergone a bit of modernization with a more understandable module structure, naming, and types. Most changes should be non-breaking for standard use cases. See here for the migration guide. There is still room for improvement in Python, but this is a good first step towards a more usable package.

Breaking changes

This release includes some breaking changes detailed below. Please review them to see if they affect you.

Failing to open an input file is no longer ignored

In the past, if VW was not able to open an input file, it would print a message and continue. This usually meant it would fall back to reading from standard input, leading to unintuitive behavior. VW will now treat failure to open an input file as an error, produce an appropriate error message, and exit.

Python 2.7 and Python 3.5 are no longer supported

Python 2.7 and 3.5 are no longer supported. The last release where they were available was 8.11.0. Details on the Python support matrix can be found here. Dropping support for these also allows us to begin to introduce new features such as type hints.

Python module structure

To align with PEP 8 and be more consistent, we renamed the following modules.

  • vowpalwabbit.DFtoVW -> vowpalwabbit.dftovw
  • vowpalwabbit.sklearn_vw -> vowpalwabbit.sklearn

We did as much as we could to keep the old names accessible. Unfortunately, since some operating systems are case insensitive, renaming the DFtoVW module causes some issues. There may be some ways of importing vowpalwabbit.DFtoVW that broke in this migration on some operating systems or setups.

For example, the following will be broken and must be changed.

from vowpalwabbit.DFtoVW import DFtoVW
# Change to ->
from vowpalwabbit.dftovw import DFtoVW

Most of the core objects from vowpalwabbit.pyvw are now accessible in the root module.

For example, the following can now be done:

from vowpalwabbit import Workspace
workspace = Workspace(quiet=True)
workspace.learn('1 | a b c')
print(workspace.predict('| b c'))

See the docs for what is in the root module.

save_resume is now the default for model saving and loading

We have seen confusion around the old default behavior, which required a flag to be supplied to continue training from a model. We decided (after a call for comment) that it makes more sense for the default behavior of VW to be to support continuing training when saving and loading a model. The new model format is slightly larger, but more flexible. If the previous behavior is required, --predict_only_model is available. You may want this if you are using the model file in an inference-only setup or if you have tooling which requires the old format.

Because more information is saved, this change also affects the format of readable models by default. If you depend on that format, please keep this change in mind.
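
As a rough sketch of the difference from Python (assuming Workspace forwards keyword arguments to the equivalent command line options, as in the example above, and that the model file names are purely illustrative):

from vowpalwabbit import Workspace

# Default in 9.0: the saved model keeps the extra state needed to resume training.
vw = Workspace(quiet=True)
vw.learn('1 | a b c')
vw.save('resumable_model.vw')

# Previous behavior: a smaller, inference-only model via --predict_only_model.
vw_small = Workspace(quiet=True, predict_only_model=True)
vw_small.learn('1 | a b c')
vw_small.save('inference_only_model.vw')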

-q: option removal

-q: has been removed. This is different from -q ::, which certainly has not been removed. -q: may have been added by mistake; it has never actually done anything and is confusing given how similar it is to the very important -q option. It has been deprecated for some time and is now removed.
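
For reference, the forms that remain supported look like this (a sketch, assuming Workspace still accepts a raw argument string; the namespaces are illustrative):

from vowpalwabbit import Workspace

# -q ab: quadratic interactions between namespaces starting with 'a' and 'b'.
vw_pair = Workspace('-q ab', quiet=True)

# -q :: : quadratic interactions between all pairs of namespaces (not removed).
vw_all = Workspace('-q ::', quiet=True)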

Python label object creation changes

There is no breaking change if you were using pyvw.Example.get_label. There is only a change if you directly constructed any label objects.

Label objects in Python have had their __init__ and from_example functions changed. __init__ no longer accepts a pyvw.Example object and instead accepts just that label’s state. from_example is now a static factory function.
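
A minimal sketch of the new pattern, assuming the renamed SimpleLabel class in vowpalwabbit.pyvw (the exact class and field names here are for illustration):

from vowpalwabbit import Workspace
from vowpalwabbit.pyvw import SimpleLabel  # assumed name of the simple label class

vw = Workspace(quiet=True)
ex = vw.parse('1 | a b c')

# Old style (no longer works): SimpleLabel(ex)
# New style: construct the label directly from its own state...
label = SimpleLabel(label=1.0)

# ...or extract it from an existing example with the static factory.
label_from_example = SimpleLabel.from_example(ex)
vw.finish_example(ex)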

Saved models now contain learning rate and power_t

In the past, when resuming training of a model from a file, VW did not remember the learning rate and power_t used in the initial training. VW will now use the same values when the model is loaded. The values can be overridden by supplying new values on the command line when resuming.
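
For example, a hedged sketch of resuming with an overridden learning rate (the model file name is illustrative, and keyword arguments are assumed to map to the usual command line options):

from vowpalwabbit import Workspace

# Resume from a previously saved model; the stored learning_rate and power_t
# are used unless explicitly overridden like this.
vw = Workspace(initial_regressor='resumable_model.vw', learning_rate=0.05, quiet=True)
vw.learn('1 | a b c')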

Feature counting fixes

A number of fixes were made to the way features are counted, which results in different counts in run results, especially around shared examples. This is technically a bug fix but is listed under breaking changes as an informative notice in case it surprises anyone. Relevant PRs:

Internal changes

The following sets of changes are internal breaking changes. They should not affect you unless you depend on the internal C++ code structure. These changes have been flagged in past releases.

Highlights

Output options

VW’s output has been cleaned up in this release. During default operation VW produces two different streams of output: the driver, which contains some initial information, progressive validation, and results; and the logger, which produces info, warning, and error messages. In the past, logging information was not produced in a consistent way. It now has a consistent format and can be controlled with the options below (a short sketch follows the list).

  • --quiet - Works the same as before but can be explained as turning off both the driver and log output streams
  • --driver_output_off - Will just turn off the driver but leave logging on
  • --driver_output <stderr|stdout> (default: stderr) - Direct driver output to the specified location
  • --log_output <stderr|stdout|compat> (default: stdout) - Direct logging output to the specified location. If compat is chosen, the output location will be what it was in 8.11. Past versions were not consistent about the output location, and this mode was added in case any user depended on that behavior.
  • --log_level <critical|error|warn|info|off> (default: info) - Log level to enable.
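
For instance, a small sketch combining these options (assuming Workspace passes the argument string straight through to the command line parser):

from vowpalwabbit import Workspace

# Keep the driver output off but leave logging on, showing only warnings and errors.
vw = Workspace('--driver_output_off --log_level warn')
vw.learn('1 | a b c')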

Options reachability warning

If an option is passed which is definitely not used by any enabled reduction, VW will now issue a warning.

vw --epsilon 0.5
[warning] Option 'epsilon' depends on another option (or combination of options) which was not supplied. Possible combinations of options which would enable this option are:
	cb_explore_pdf
	warm_cb
	bag, cb_explore_adf
	cb_explore_adf, cover
	cb_explore_adf, first
	cb_explore_adf, synthcover
	cb_explore_adf, rnd
	cb_explore_adf, softmax
	cb_explore_adf
	cb_explore

Documentation site

This is the first release which includes the new documentation site. We plan on iterating on this and converging documentation over time. The tutorials are now hosted here and redirect from their old locations. The Python-based tutorials and examples have links at the top of the page to make the content interactive in the browser. Try it out!

Python package now includes type hints

Now that Python 2 is no longer supported, we were able to add type hints to the Python code base. These type hints are also checked in CI for each commit. They make the documentation much clearer too.

Python Linux AArch64 binary wheels

Thanks to @odidev the Python package now includes AArch64 binary wheels for Linux.

vowpalwabbit.pyvw class naming

Classes in vowpalwabbit.pyvw have been renamed to match PEP 8. The old names are still accessible but are deprecated.

New reduction: FreeGrad

FreeGrad is a new base learner described in this paper. It can be tried where coin or gradient descent may have been used before.
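
A quick way to try it from Python (a sketch, assuming the reduction is enabled with the --freegrad option and that the features shown are illustrative):

from vowpalwabbit import Workspace

# Use FreeGrad as the base learner instead of the default gradient descent.
vw = Workspace('--freegrad', quiet=True)
vw.learn('1 | price:0.5 size:2')
print(vw.predict('| price:0.4 size:1'))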

Learn more at the wiki page

New experimental reduction: AutoML

AutoML is a new reduction whose primary goal is to provide users with a hands-off method to get an optimal learning configuration without prior experience using VW or an in-depth understanding of their dataset. A “configuration” is a general term which could be extended to any aspect of VW (enabled reductions, number of passes, etc.), but as of now a configuration specifies the set of namespace interactions used in a contextual bandit problem. More specifically, a configuration specifies a set of interactions which will be excluded from the default configuration -q :: (all quadratic interactions).

Learn more at the wiki page

New experimental reduction: Interaction Grounded Learning

VW’s learning algorithm attempts to minimize loss, and the contextual bandit input format specifically calls for a cost. However, in the setting of reinforcement learning and contextual bandits it is common for a data point's label to contain a reward, which the agent wishes to maximize. Accidentally supplying a reward in place of a cost in a contextual bandit label in VW will result in incorrect learning, as minimizing this value is the opposite of what is intended.

This reduction tracks incoming labels and determines whether they are rewards or costs. Note that positive values are assumed to be rewards and negative values costs, so if your dataset is labelled such that positive values are still costs used to penalize the learner, this automatic translation will not work.

Learn more at the wiki page

New experimental reduction: Baseline Challenger Contextual Bandit

This reduction builds a confidence interval around the baseline action and uses it instead of the policy as the greedy action if its lower bound is higher than the policy's expected reward.

Learn more at the pull request

Experimental: Full name interactions

An experimental new feature was added which allows interactions to be specified using the entire namespace name instead of just the first character. It is enabled with the option --experimental_full_name_interactions <arg>, where the value of <arg> is the list of namespaces in the interaction separated by |. For example, to interact the two namespaces Action and Access: --experimental_full_name_interactions Action|Access. This allows interactions between namespaces whose first character is the same; previously that would have required prefixing the namespaces with a unique character.
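
A small sketch using the option named above (assuming Workspace accepts a raw argument string; the namespaces and features are illustrative):

from vowpalwabbit import Workspace

# Interact the full namespace names Action and Access, even though both start with 'A'.
vw = Workspace('--experimental_full_name_interactions Action|Access', quiet=True)
vw.learn('1 |Action click:1 |Access mobile:1')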

Python package includes CLI tool

Thanks to @mathcass the Python package now includes the CLI tool. There are a few limitations to using this compared to the executable itself which are important to know:

  • Standard input cannot be used (i.e. redirecting from cat into the stdin of the process)
  • Options --onethread and --args cannot be used

Experimental: Privacy Preserving Learning

One of the RLOS projects in 2021 was around privacy preserving learning and was worked on by @manavsinghal157. The feature implements aggregated learning by saving only those features that have been seen by a minimum threshold of users.

Note: this feature is available behind a compiler flag and is still experimental. See the wiki page for instructions.

Learn more at the wiki page

0-indexing for One-Against-All

Previously labels had to be 1-indexed for the oaa, csoaa, and csoaa_ldf reductions. Now these reductions can use 0-indexed labels, and report 0-indexed predictions. VW will dynamically detect the indexing of the examples, but you can also set it explicitly with --indexing 0 or --indexing 1.
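
A small sketch of 0-indexed one-against-all using the options named above (the labels and features are illustrative):

from vowpalwabbit import Workspace

# Three classes labelled 0, 1, 2; --indexing 0 makes the choice explicit.
vw = Workspace('--oaa 3 --indexing 0', quiet=True)
vw.learn('0 | a')
vw.learn('1 | b')
vw.learn('2 | c')
print(vw.predict('| b'))  # predictions are reported 0-indexed as well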

Thank you

A huge thank you and welcome to all of the new contributors since the last release:

And of course thank you to existing contributors:

Full changelist