Coverage for /pythoncovmergedfiles/medio/medio/usr/local/lib/python3.8/site-packages/tensorflow/python/ops/numpy_ops/__init__.py: 86%

21 statements  

coverage.py v7.4.0, created at 2024-01-03 07:57 +0000

# Copyright 2020 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""# tf.experimental.numpy: NumPy API on TensorFlow.

This module provides a subset of the NumPy API, built on top of TensorFlow
operations. APIs are based on and have been tested with NumPy version 1.16.

The set of supported APIs may be expanded over time. Future releases may also
change the baseline version of the NumPy API being supported. Some systematic
differences with NumPy are listed later in the "Differences with NumPy"
section.

## Getting Started

Please also see the [TensorFlow NumPy Guide](
https://www.tensorflow.org/guide/tf_numpy).

In the code snippets below, we assume that `tf.experimental.numpy` is
imported as `tnp` and NumPy is imported as `np`.

```python
print(tnp.ones([2, 1]) + np.ones([1, 2]))
```

## Types

The module provides an `ndarray` class which wraps an immutable `tf.Tensor`.
Additional functions are provided which accept array-like objects. Here
array-like objects include `ndarray` instances as defined by this module, as
well as `tf.Tensor`, in addition to the types accepted by NumPy.

A subset of NumPy dtypes is supported. Type promotion follows NumPy
semantics.

```python
print(tnp.ones([1, 2], dtype=tnp.int16) + tnp.ones([2, 1], dtype=tnp.uint8))
```

## Array Interface

The `ndarray` class implements the `__array__` interface. This should allow
these objects to be passed into contexts that expect a NumPy or array-like
object (e.g. matplotlib).

```python
np.sum(tnp.ones([1, 2]) + np.ones([2, 1]))
```

## TF Interoperability

The TF-NumPy API calls can be interleaved with TensorFlow calls without
incurring Tensor data copies. This is true even if the `ndarray` or
`tf.Tensor` is placed on a non-CPU device.

In general, the expected behavior should be on par with that of code involving
`tf.Tensor` and running stateless TensorFlow functions on them.

```python
tnp.sum(tnp.ones([1, 2]) + tf.ones([2, 1]))
```

Note that the `__array_priority__` is currently chosen to be lower than that of
`tf.Tensor`. Hence the `+` operator above returns a `tf.Tensor`.

Additional examples of interoperability include:

* using `with tf.GradientTape()` scope to compute gradients through the
  TF-NumPy API calls (see the sketch below)
* using `tf.distribute.Strategy` scope for distributed execution
* using `tf.vectorized_map()` for speeding up code using auto-vectorization

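For instance, gradients can be computed through TF-NumPy calls in the usual
way. The snippet below is an illustrative sketch (the variable names are
arbitrary); TF-NumPy ops record onto the tape just like TensorFlow ops:

```python
w = tf.Variable(3.0)
with tf.GradientTape() as tape:
  loss = tnp.sum(tnp.square(w))  # TF-NumPy calls are recorded on the tape
grad = tape.gradient(loss, w)    # d(w**2)/dw = 2 * w = 6.0
```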

## Device Support

Given that `ndarray` and functions wrap TensorFlow constructs, the code will
have GPU and TPU support on par with TensorFlow. Device placement can be
controlled by using `with tf.device` scopes. Note that these devices could
be local or remote.

```python
with tf.device("GPU:0"):
  x = tnp.ones([1, 2])
print(tf.convert_to_tensor(x).device)
```

## Graph and Eager Modes

Eager mode execution should typically match NumPy semantics of executing
op-by-op. However, the same code can be executed in graph mode by putting it
inside a `tf.function`. The function body can contain NumPy code, and the
inputs can be `ndarray` as well.

```python
@tf.function
def f(x, y):
  return tnp.sum(x + y)

f(tnp.ones([1, 2]), tf.ones([2, 1]))
```

Python control flow based on `ndarray` values will be translated by
[autograph](https://www.tensorflow.org/code/tensorflow/python/autograph/g3doc/reference/index.md)
into `tf.cond` and `tf.while_loop` constructs. The code can be XLA compiled
for further optimizations.

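For instance, here is an illustrative sketch of a data-dependent branch (the
function name is arbitrary, not part of this module):

```python
@tf.function
def zero_if_negative_sum(x):
  if tnp.sum(x) < 0:          # data-dependent condition on an `ndarray` value;
    return tnp.zeros_like(x)  # autograph converts this branch into `tf.cond`
  return x

zero_if_negative_sum(tnp.ones([2, 2]))
```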

However, note that graph mode execution can change the behavior of certain
operations, since symbolic execution may not have information that is only
computed at runtime. Some differences are:

* Shapes can be incomplete or unknown in graph mode. This means that
  `ndarray.shape`, `ndarray.size` and `ndarray.ndim` can return `ndarray`
  objects instead of returning integer (or tuple of integer) values.
* `__len__`, `__iter__` and `__index__` methods of `ndarray` may similarly
  not be supported in graph mode. Code using these may need to change to
  explicit shape operations or control flow constructs (see the sketch below).
* Also note the [autograph limitations](
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/limitations.md).

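For example, a minimal sketch of replacing `len` with an explicit shape
operation (the helper name here is arbitrary):

```python
@tf.function
def num_rows(x):
  # `len(x)` can fail in graph mode when the leading dimension is not
  # statically known; `tf.shape(x)[0]` returns it as a tensor that is
  # resolved at runtime instead.
  return tf.shape(x)[0]

num_rows(tnp.ones([3, 4]))  # tf.Tensor(3, shape=(), dtype=int32)
```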

## Mutation and Variables

`ndarrays` currently wrap immutable `tf.Tensor` objects. Hence mutation
operations like slice assignment are not supported. This may change in the
future. Note however that one can directly construct a `tf.Variable` and use
that with the TF-NumPy APIs.

```python
tf_var = tf.Variable(2.0)
tf_var.assign_add(tnp.square(tf_var))
```

## Differences with NumPy

Here is a non-exhaustive list of differences:

* Not all dtypes are currently supported, e.g. `np.float96` and `np.float128`.
  The `np.object_`, `np.str_` and `np.recarray` types are not supported.
* `ndarray` storage is in C order only. Fortran order, views and
  `stride_tricks` are not supported.
* Only a subset of functions and modules is supported. This set will be
  expanded over time. For supported functions, some arguments or argument
  values may not be supported. These differences are generally documented in
  the function comments. Full `ufunc` support is also not provided.
* Buffer mutation is currently not supported. `ndarrays` wrap immutable
  tensors. This means that output buffer arguments (e.g. `out` in ufuncs) are
  not supported.
* The NumPy C API is not supported. NumPy's Cython and SWIG integration are
  not supported.
"""
# TODO(wangpeng): Append `np_export`ed symbols to the comments above.

# pylint: disable=g-direct-tensorflow-import

from tensorflow.python.ops.array_ops import newaxis
from tensorflow.python.ops.numpy_ops import np_random as random
from tensorflow.python.ops.numpy_ops import np_utils
# pylint: disable=wildcard-import
from tensorflow.python.ops.numpy_ops.np_array_ops import *  # pylint: disable=redefined-builtin
from tensorflow.python.ops.numpy_ops.np_arrays import ndarray
from tensorflow.python.ops.numpy_ops.np_config import *
from tensorflow.python.ops.numpy_ops.np_dtypes import *
from tensorflow.python.ops.numpy_ops.np_math_ops import *  # pylint: disable=redefined-builtin
# pylint: enable=wildcard-import
from tensorflow.python.ops.numpy_ops.np_utils import finfo
from tensorflow.python.ops.numpy_ops.np_utils import promote_types
from tensorflow.python.ops.numpy_ops.np_utils import result_type


# pylint: disable=redefined-builtin,undefined-variable
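# NumPy-style aliases: `max`, `min` and `round` simply forward to `amax`,
# `amin` and `around`; the `np_doc` decorator attaches NumPy-style
# documentation and links each one as an alias of its target function.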

@np_utils.np_doc("max", link=np_utils.AliasOf("amax"))
def max(a, axis=None, keepdims=None):
  return amax(a, axis=axis, keepdims=keepdims)


@np_utils.np_doc("min", link=np_utils.AliasOf("amin"))
def min(a, axis=None, keepdims=None):
  return amin(a, axis=axis, keepdims=keepdims)


@np_utils.np_doc("round", link=np_utils.AliasOf("around"))
def round(a, decimals=0):
  return around(a, decimals=decimals)
# pylint: enable=redefined-builtin,undefined-variable