
Releases: tensorlayer/TensorLayer

TensorLayer 1.8.4rc0

05 Apr 06:19
a86008e
Pre-release

TL Models - Provides pre-trained VGG16, SqueezeNet and MobileNetV1 in one line of code (by @lgarithm @zsdonghao), more models will be provided soon!

    >>> x = tf.placeholder(tf.float32, [None, 224, 224, 3])
    >>> # get the whole model
    >>> net = tl.models.MobileNetV1(x)
    >>> # restore pre-trained parameters
    >>> sess = tf.InteractiveSession()
    >>> net.restore_params(sess)
    >>> # use for inference
    >>> probs = tf.nn.softmax(net.outputs)
  • Extract features and train a classifier with 100 classes
    >>> x = tf.placeholder(tf.float32, [None, 224, 224, 3])
    >>> # get model without the last layer
    >>> cnn = tl.models.MobileNetV1(x, end_with='reshape')
    >>> # add one more layer
    >>> net = tl.layers.Conv2d(cnn, 100, (1, 1), (1, 1), name='out')
    >>> net = tl.layers.FlattenLayer(net, name='flatten')
    >>> # initialize all parameters
    >>> sess = tf.InteractiveSession()
    >>> tl.layers.initialize_global_variables(sess)
    >>> # restore pre-trained parameters
    >>> cnn.restore_params(sess)
    >>> # train your own classifier (only update the last layer)
    >>> train_params = tl.layers.get_variables_with_name('out')
  • Reuse model
    >>> x1 = tf.placeholder(tf.float32, [None, 224, 224, 3])
    >>> x2 = tf.placeholder(tf.float32, [None, 224, 224, 3])
    >>> # get network without the last layer
    >>> net1 = tl.models.MobileNetV1(x1, end_with='reshape')
    >>> # reuse the parameters with different input
    >>> net2 = tl.models.MobileNetV1(x2, end_with='reshape', reuse=True)
    >>> # restore pre-trained parameters (as they share parameters, we don’t need to restore net2)
    >>> sess = tf.InteractiveSession()
    >>> net1.restore_params(sess)

TensorLayer 1.8.3

22 Mar 17:10
9f756b7

This release focuses on model compression and acceleration; feel free to discuss here.

New APIs

  • TenaryDenseLayer, TenaryConv2d, DorefaDenseLayer, DorefaConv2d for Ternary Weight Networks and DoReFa-Net (by @XJTUWYD)
  • BinaryDenseLayer, BinaryConv2d, SignLayer, ScaleLayer for BinaryNet (by @zsdonghao)
  • tl.act.htanh for BinaryNet (by @zsdonghao)
  • GlobalMeanPool3d, GlobalMaxPool3d (by @zsdonghao)
  • ZeroPad1d, ZeroPad2d, ZeroPad3d (by @zsdonghao)
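The math behind the quantization layers above is compact: BinaryNet binarizes values with the sign function, and hard tanh (the function behind `tl.act.htanh`) clips to [-1, 1] and is commonly used as its straight-through surrogate during backprop. A minimal pure-Python sketch of the two functions, for illustration only (not TensorLayer's implementation; here sign(0) is taken as +1, the usual BinaryNet convention):

```python
def htanh(x):
    """Hard tanh: the identity inside [-1, 1], saturating outside it."""
    return max(-1.0, min(1.0, x))

def sign(x):
    """Binarize to +1/-1 (0 maps to +1 by convention)."""
    return 1.0 if x >= 0 else -1.0

print(htanh(0.5), htanh(3.0), htanh(-2.5))  # 0.5 1.0 -1.0
print(sign(0.3), sign(-0.3))                # 1.0 -1.0
```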

New Updates

New Examples

New Discussion

TensorLayer 1.8.3rc0

19 Mar 15:31
4a444ce
Pre-release

New Updates

New Examples

TensorLayer 1.8.2

17 Mar 14:01

As this version is more stable, we highly recommend updating to it.

Functions

This is an experimental API package for building binary networks. At the moment we use matrix multiplication rather than add/subtract and bit-count operations, so these APIs will not speed up inference. For production, you can train a model with TensorLayer and deploy it in a customized C/C++ implementation (we may provide an extra C/C++ binary-net framework that can load models from TensorLayer).
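The speedup that a dedicated binary backend gets comes from a simple identity: when both operands are restricted to {-1, +1} and packed as bit masks, a dot product reduces to one XOR (or XNOR) plus a bit count. A small pure-Python sketch of that identity, purely illustrative:

```python
import random

def binary_dot(a_bits, b_bits, n):
    """Dot product of two {-1,+1} vectors of length n, encoded as bit masks
    (bit i set means element i is +1). Signs differ exactly where a^b has a
    set bit, so dot = (#agree) - (#disagree) = n - 2 * popcount(a ^ b)."""
    return n - 2 * bin(a_bits ^ b_bits).count("1")

# check against the plain floating-point-style dot product
n = 16
a = [random.choice([-1, 1]) for _ in range(n)]
b = [random.choice([-1, 1]) for _ in range(n)]
a_bits = sum(1 << i for i, v in enumerate(a) if v == 1)
b_bits = sum(1 << i for i, v in enumerate(b) if v == 1)
assert binary_dot(a_bits, b_bits, n) == sum(x * y for x, y in zip(a, b))
```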

Note that these experimental APIs may change in the future.

  • Load the Street View House Numbers (SVHN) dataset in 1 line of code (by @zsdonghao)
  • Load Fashion-MNIST in 1 line of code (by @AutuanLiu)
  • SeparableConv2d, which performs a depthwise convolution that acts separately on channels, followed by a pointwise convolution that mixes channels. In contrast, DepthwiseConv2d performs only the depthwise convolution, which allows us to add batch normalization between the depthwise and pointwise convolutions. (by @zsdonghao)
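The parameter savings of the depthwise-separable factorization are easy to work out: a standard k×k convolution with C_in input and C_out output channels has k·k·C_in·C_out weights, while depthwise (k·k·C_in) plus pointwise (C_in·C_out) gives k·k·C_in + C_in·C_out. A quick back-of-the-envelope check, ignoring biases:

```python
def standard_conv_params(k, c_in, c_out):
    # one k x k kernel spanning all input channels, per output channel
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    # depthwise: one k x k kernel per input channel;
    # pointwise: a 1x1 convolution that mixes channels
    return k * k * c_in + c_in * c_out

# e.g. a 3x3 convolution going from 128 to 256 channels
print(standard_conv_params(3, 128, 256))   # 294912
print(separable_conv_params(3, 128, 256))  # 33920
```

For a 3×3 kernel the separable form uses roughly 8-9x fewer parameters, which is the main reason MobileNet-style architectures are built on it.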

Updates

Bug Fix

Maintain documentation by @lgarithm @luomai @wagamamaz @zsdonghao

TensorLayer 1.8.1

11 Mar 03:18
1e50e83

We highly recommend updating to 1.8.1:

Updates

TensorLayer 1.8.0

07 Mar 16:00
4a183c6

We recommend updating; please report any bugs or issues.

Features

>>> x = tf.placeholder(tf.float32, [None, 100])
>>> n = tl.layers.InputLayer(x, name='in')
>>> n = tl.layers.DenseLayer(n, 80, name='d1')
>>> n = tl.layers.DenseLayer(n, 80, name='d2')
>>> print(n)
... Last layer is: DenseLayer (d2) [None, 80]

The outputs can be sliced as follows:

>>> n2 = n[:, :30]
>>> print(n2)
... Last layer is: Layer (d2) [None, 30]

The outputs of all layers can be iterated as follows:

>>> for l in n:
>>>    print(l)
... Tensor("d1/Identity:0", shape=(?, 80), dtype=float32)
... Tensor("d2/Identity:0", shape=(?, 80), dtype=float32)
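The slicing and iteration behavior shown above can be pictured with a tiny stand-in class (purely illustrative, not TensorLayer's actual implementation): `__getitem__` returns a new network whose last output is sliced, and `__iter__` walks the per-layer outputs in order.

```python
class TinyNet:
    """Toy stand-in for a layer stack: holds one output per layer."""
    def __init__(self, outputs):
        self.all_outputs = outputs    # one "tensor" (here: a plain list) per layer
        self.outputs = outputs[-1]    # the last layer's output

    def __getitem__(self, key):
        # slicing yields a new network whose last output is sliced
        return TinyNet(self.all_outputs[:-1] + [self.outputs[key]])

    def __iter__(self):
        # iterating yields every layer's output in order
        return iter(self.all_outputs)

n = TinyNet([[1, 2], [10, 20, 30, 40]])
n2 = n[:2]
print(n2.outputs)        # [10, 20]
print([l for l in n])    # [[1, 2], [10, 20, 30, 40]]
```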

APIs

Others

TensorLayer 1.8.0rc

24 Feb 18:16
33016c7
Pre-release

This is a pre-release version. We recommend users to update and report bugs or issues.

Features

APIs

  • Simplify DeformableConv2dLayer into DeformableConv2d (by @zsdonghao)
  • Merge tl.ops into tl.utils (by @luomai)
  • DeConv2d no longer requires out_size for TensorFlow 1.3+ (by @zsdonghao)

Others

Maintain TensorLayer 1.7.4

01 Feb 15:17

This release includes the following:

TensorLayer 1.7.3

07 Jan 16:14

This release includes the following:

TensorLayer 1.7.2

14 Dec 18:36

This release includes the following: