
Introducing Keras 3 for R

We are thrilled to introduce keras3, the next version of the Keras R
package. keras3 is a ground-up rebuild of {keras}, maintaining the
beloved features of the original while refining and simplifying the API
based on valuable insights gathered over the past few years.
Keras provides a complete toolkit for building deep learning models in
R—it’s never been easier to build, train, evaluate, and deploy deep
learning models.
Installation
To install Keras 3:
install.packages("keras3")
library(keras3)
install_keras()
What’s new:
Documentation
Great documentation is essential, and we’ve worked hard to make sure
that keras3 has excellent documentation, both now and in the future.
Keras 3 comes with a full refresh of the website:
https://keras.posit.co. There, you will find guides, tutorials,
reference pages with rendered examples, and a new examples gallery. All
the reference pages and guides are also available via R’s built-in help
system.
In a fast-moving ecosystem like deep learning, creating great
documentation and wrappers once is not enough. There also need to be
workflows that ensure the documentation is up-to-date with upstream
dependencies. To accomplish this, {keras3} includes two new maintainer
features that ensure the R documentation and function wrappers will stay
up-to-date:

We now take snapshots of the upstream documentation and API surface.
With each release, all R documentation is rebased on upstream
updates. This workflow ensures that all R documentation (guides,
examples, vignettes, and reference pages) and R function signatures
stay up-to-date with upstream. This snapshot-and-rebase
functionality is implemented in a new standalone R package,
{doctether}, which may
be useful for R package maintainers needing to keep documentation in
parity with dependencies.
All examples and vignettes can now be evaluated and rendered during
a package build. This ensures that no stale or broken example code
makes it into a release. It also means all user-facing example code
now additionally serves as an extended suite of snapshot unit and
integration tests.
Evaluating code in vignettes and examples is still not permitted
under CRAN restrictions; we work around this by adding additional
package build steps that pre-render the examples and vignettes.

Combined, these two features will make it substantially easier for the
Keras R package to keep its features and documentation in parity with
the Keras Python API.
Multi-backend support
Soon after its launch in 2015, Keras featured support for most popular
deep learning frameworks: TensorFlow, Theano, MXNet, and CNTK. Over
time, the landscape shifted; Theano, MXNet, and CNTK were retired, and
TensorFlow surged in popularity. In 2021, TensorFlow became the sole
supported Keras backend. Now, the landscape has shifted again.
Keras 3 brings the return of multi-backend support. Choose a backend by
calling:
use_backend("jax") # or "tensorflow", "torch", "numpy"
The default backend continues to be TensorFlow, which is the best choice
for most users today; for small-to-medium sized models this is still the
fastest backend. However, each backend has different strengths, and
being able to switch easily will let you adapt to changes as your
project, or the frameworks themselves, evolve.
Today, switching to the Jax backend can, for some model types, bring
substantial speed improvements. Jax is also the only backend that has
support for a new model parallelism distributed training API. Switching
to Torch can be helpful during development, often producing simpler
tracebacks while debugging.
Keras 3 also lets you incorporate any pre-existing Torch, Jax, or Flax
module as a standard Keras layer by using the appropriate wrapper,
letting you build atop existing projects with Keras. For example, train
a Torch model using the Keras high-level training API (compile() +
fit()), or include a Flax module as a component of a larger Keras
model. The new multi-backend support lets you use Keras à la carte.
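As a rough sketch of what this can look like (the wrapper shown here,
layer_torch_module_wrapper() with a module argument, is an assumption;
check the reference pages for the exact interfaces, including the Flax
and Jax equivalents), you might wrap an existing Torch module and train
it with the Keras API:

library(keras3)
use_backend("torch")

# Assume an existing (Python) torch.nn.Module, imported via reticulate.
torch <- reticulate::import("torch")
pretrained_block <- torch$nn$Linear(64L, 10L)

# Wrap it so it behaves like any other Keras layer, then use the usual
# compile() + fit() workflow.
inputs  <- keras_input(shape = 64)
outputs <- inputs |>
  layer_torch_module_wrapper(module = pretrained_block) |>
  layer_activation("softmax")
model <- keras_model(inputs, outputs)
model |> compile(optimizer = "adam", loss = "sparse_categorical_crossentropy")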
The ‘Ops’ family
{keras3} introduces a new “Operations” family of functions. The Ops
family, currently with over 200 functions, provides a comprehensive
suite of operations typically needed when operating on nd-arrays for
deep learning. It supersedes and greatly expands on the former family
of backend functions prefixed with k_ in the {keras} package.
The Ops functions let you write backend-agnostic code. They provide a
uniform API, regardless of whether you’re working with TensorFlow Tensors,
Jax Arrays, Torch Tensors, Keras Symbolic Tensors, NumPy arrays, or R
arrays.
The Ops functions:

all start with prefix op_ (e.g., op_stack())
all are pure functions (they produce no side-effects)
all use consistent 1-based indexing, and coerce doubles to integers
as needed
all are safe to use with any backend (tensorflow, jax, torch, numpy)
all are safe to use in both eager and graph/jit/tracing modes

The Ops API includes:

The entirety of the NumPy API (numpy.*)
The TensorFlow NN API (tf.nn.*)
Common linear algebra functions (a subset of scipy.linalg.*)
A subfamily of image transformers
A comprehensive set of loss functions
And more!
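
As a small illustration, here is a sketch of backend-agnostic code
written entirely with op_ functions; the same lines run unchanged
whichever backend is active:

library(keras3)

x <- op_array(rbind(c(1, 2, 3),
                    c(4, 5, 6)))      # convert an R matrix to a backend tensor
w <- op_ones(c(3, 2))

y <- op_matmul(x, w)                  # matrix multiplication
op_mean(y, axis = 1)                  # reduce along the first axis (1-based)
op_stack(list(x, op_multiply(x, 2)))  # stack tensors along a new leading axis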

Ingest tabular data with layer_feature_space()
keras3 provides a new set of functions for building models that ingest
tabular data: layer_feature_space() and a family of feature transformer
functions (prefixed with feature_). These can be used either as inputs
to a Keras model or as preprocessing steps in a data loading pipeline
(e.g., in a tfdatasets::dataset_map()).
See the reference page and a full end-to-end example to learn more.
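A minimal sketch of the idea (the column names and the particular
feature_ helpers used here are illustrative assumptions; see the
reference page for the full family):

library(keras3)

# Declare how each column should be encoded ...
feature_space <- layer_feature_space(
  features = list(
    age = feature_float_normalized(),
    job = feature_string_categorical()
  ),
  output_mode = "concat"
)

# ... then learn vocabularies and normalization statistics from the
# training data. `train_df` is a placeholder data frame with `age` and
# `job` columns. Afterwards, feature_space() can be called like a layer:
# as a model input, or inside a tfdatasets::dataset_map() step.
feature_space |> adapt(tfdatasets::tensor_slices_dataset(train_df))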
New Subclassing API
The subclassing API has been refined and extended to more Keras
types.
Define subclasses simply by calling: Layer(), Loss(), Metric(),
Callback(), Constraint(), Model(), and LearningRateSchedule().
Defining {R6} proxy classes is no longer necessary.
Additionally, the documentation page for each of the subclassing
functions now contains a comprehensive listing of all the available
attributes and methods for that type. Check out
?Layer to see what’s
possible.
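For example, a small custom layer defined with Layer() might look like
this (a minimal sketch):

library(keras3)

layer_scale_shift <- Layer(
  classname = "ScaleShift",

  initialize = function(units, ...) {
    super$initialize(...)
    self$units <- units
  },

  build = function(input_shape) {
    self$scale <- self$add_weight(shape = shape(self$units), initializer = "ones")
    self$shift <- self$add_weight(shape = shape(self$units), initializer = "zeros")
  },

  call = function(inputs) {
    op_add(op_multiply(inputs, self$scale), self$shift)
  }
)

# Layer() returns a constructor, usable like the built-in layer_* functions:
layer <- layer_scale_shift(units = 4)
layer(op_ones(c(2, 4)))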
Saving and Export
Keras 3 brings a new model serialization and export API. It is now much
simpler to save and restore models, and also, to export them for
serving.

save_model() / load_model(): A new high-level file format (extension:
.keras) for saving and restoring a full model.
The file format is backend-agnostic, meaning you can convert trained
models between backends simply by saving with one backend and then
loading with another. For example, train a model using Jax, and then
convert it to TensorFlow for export.
export_savedmodel(): Export just the forward pass of a model as a
compiled artifact for inference with TF Serving or (soon) Posit
Connect. This is the easiest way to deploy a Keras model for efficient
and concurrent inference serving, all without any R or Python runtime
dependency.
Lower-level entry points:

save_model_weights() / load_model_weights(): Save just the weights as .h5 files.
save_model_config() / load_model_config(): Save just the model architecture as a JSON file.

register_keras_serializable(): Register custom objects to enable them
to be serialized and deserialized.
serialize_keras_object() / deserialize_keras_object(): Convert any
Keras object to an R list of simple types that is safe to convert to
JSON or rds.
See the new Serialization and Saving vignette for more details and
examples.
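In practice the round trip is just a few calls (a small sketch; the
model and file paths are placeholders):

library(keras3)

model |> save_model("model.keras")        # full model: architecture, weights, optimizer state
restored <- load_model("model.keras")     # can be loaded under a different backend

model |> save_model_weights("weights.h5")       # weights only
model |> export_savedmodel("exported-model/")   # forward pass only, e.g. for TF Serving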

New random family
A new family of random tensor generators.
Like the Ops family, these work with all backends. Additionally, all the
RNG-using methods have support for stateless usage when you pass in a
seed generator. This enables tracing and compilation by frameworks that
have special support for stateless, pure functions, like Jax. See
?random_seed_generator() for example usage.
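For example (a small sketch; random_normal() and random_uniform() are
typical members of the family), the same generator can be used with a
plain integer seed or, for stateless use, with a seed generator object:

library(keras3)

# A fixed integer seed gives reproducible draws.
random_normal(shape = c(2, 3), seed = 42)

# A seed generator carries its own state forward between calls, which keeps
# the function pure and therefore safe to trace/jit-compile (e.g., with Jax).
gen <- random_seed_generator(seed = 1)
random_normal(shape = c(2, 3), seed = gen)
random_uniform(shape = c(2, 3), seed = gen)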
Other additions:

New shape() function, a one-stop utility for working with tensor shapes
in all contexts (see the short sketch after this list).
New and improved print(model) and plot(model) methods. See some
examples of output in the Functional API guide.
All new fit() progress bar and live metrics viewer output,
including new dark-mode support in the RStudio IDE.
New config family, a curated set of functions for getting and setting
Keras global configurations.
All of the other function families have expanded with new members.
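A quick sketch of shape() in its two roles, constructing a shape and
retrieving one (the getter usage shown here is assumed from the
"all contexts" description):

library(keras3)

s <- shape(NA, 28, 28, 1)          # NA marks a dimension of unknown size (e.g., the batch)
inputs <- keras_input(shape = shape(28, 28, 1))
shape(inputs)                      # shape() also retrieves the shape of an existing tensor
shape(op_ones(c(2, 3)))            # including eager backend tensors: shape(2, 3)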

Migrating from {keras} to {keras3}
{keras3} supersedes the {keras} package.
If you’re writing new code today, you can start using {keras3} right
away.
If you have legacy code that uses {keras}, you are encouraged to
update the code for {keras3}. For many high-level API functions, such
as layer_dense(), fit(), and keras_model(), minimal to no changes
are required. However, there is a long tail of small changes that you
might need to make when updating code that made use of the lower-level
Keras API. Some of those are documented here:
https://keras.io/guides/migrating_to_keras_3/.
If you’re running into issues or have questions about updating, don’t
hesitate to ask on https://github.com/rstudio/keras/issues or
https://github.com/rstudio/keras/discussions.
The {keras} and {keras3} packages will coexist while the community
transitions. During the transition, {keras} will continue to receive
patch updates for compatibility with Keras v2, which continues to be
published to PyPI under the package name tf-keras. After tf-keras is
no longer maintained, the {keras} package will be archived.
Summary
In summary, {keras3} is a robust update to the Keras R package,
incorporating new features while preserving the ease of use and
functionality of the original. The new multi-backend support,
comprehensive suite of Ops functions, refined model serialization API,
and updated documentation workflows enable users to easily take
advantage of the latest developments in the deep learning community.
Whether you are a seasoned Keras user or just starting your deep
learning journey, Keras 3 provides the tools and flexibility to build,
train, and deploy models with ease and confidence. As we transition from
Keras 2 to Keras 3, we are committed to supporting the community and
ensuring a smooth migration. We invite you to explore the new features,
check out the updated documentation, and join the conversation on our
GitHub discussions page. Welcome to the next chapter of deep learning in
R with Keras 3!
