BooleanImage¶

class menpo.image.BooleanImage(mask_data, copy=True)[source]¶
Bases: Image

A mask image made from binary pixels. The region of the image that is left exposed by the mask is referred to as the 'masked region'. The set of 'masked' pixels is those pixels corresponding to a True value in the mask.

Parameters:
- mask_data ((M, N, ..., L) ndarray) – The binary mask data. Note that there is no channel axis - a 2D BooleanImage is built from just a 2D numpy array of mask_data. Values are automatically coerced to boolean.
- copy (bool, optional) – If False, the image_data will not be copied on assignment. Note that if the array you provide is not boolean, a copy will still be made. In general this should only be used if you know what you are doing.
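The coercion and copy behaviour described above can be sketched with plain numpy. This is a hypothetical standalone helper for illustration, not menpo's internal code:

```python
import numpy as np

# Sketch of BooleanImage's mask_data handling: any numeric array is
# coerced to boolean (non-zero -> True); a copy is always made when the
# input is not already boolean, and otherwise only when copy=True.
def as_boolean_mask(mask_data, copy=True):
    mask = np.asarray(mask_data)
    if mask.dtype != bool:
        mask = mask.astype(bool)   # this conversion always copies
    elif copy:
        mask = mask.copy()
    return mask

mask = as_boolean_mask([[0, 1], [2, 0]])
# mask == [[False, True], [True, False]]
```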
-
as_PILImage()¶
Return a PIL copy of the image. Depending on the image data type, different operations are performed:

dtype    Processing
uint8    No processing, directly converted to PIL
bool     Scale by 255, convert to uint8
float32  Scale by 255, convert to uint8
float64  Scale by 255, convert to uint8
OTHER    Raise ValueError

The image must have only 1 or 3 channels and be 2 dimensional. Non-uint8 images must be in the range [0, 1] to be converted.

Returns: pil_image (PILImage) – PIL copy of image
Raises:
ValueError – If the image is not 2D with 1 or 3 channels.
ValueError – If the pixels data type is not float32, float64, bool or uint8.
ValueError – If the pixels data type is float32 or float64 and the pixel range is outside of [0, 1].
-
as_greyscale(mode='luminosity', channel=None)¶
Returns a greyscale version of the image. If the image does not represent a 2D RGB image, then the luminosity mode will fail.

Parameters:
- mode ({average, luminosity, channel}, optional) –

  mode        Greyscale Algorithm
  average     Equal average of all channels
  luminosity  Calculates the luminance using the CCIR 601 formula: \[Y' = 0.2989 R' + 0.5870 G' + 0.1140 B'\]
  channel     A specific channel is chosen as the intensity value.

- channel (int, optional) – The channel to be taken. Only used if mode is channel.

Returns: greyscale_image (MaskedImage) – A copy of this image in greyscale.
-
as_histogram(keep_channels=True, bins='unique')¶
Histogram binning of the values of this image.

Parameters:
- keep_channels (bool, optional) – If set to False, it returns a single histogram for all the channels of the image. If set to True, it returns a list of histograms, one for each channel.
- bins ({unique}, positive int or sequence of scalars, optional) – If set equal to 'unique', the bins of the histograms are centred on the unique values of each channel. If set equal to a positive int, then this is the number of bins. If set equal to a sequence of scalars, these will be used as bins centres.

Returns:
- hist (ndarray or list with n_channels ndarrays inside) – The histogram(s). If keep_channels=False, then hist is an ndarray. If keep_channels=True, then hist is a list with len(hist)=n_channels.
- bin_edges (ndarray or list with n_channels ndarrays inside) – An array or a list of arrays corresponding to the above histograms that store the bins' edges.

Raises: ValueError – Bins can be either 'unique', a positive int or a sequence of scalars.

Examples

Visualizing the histogram when a list of array bin edges is provided:

>>> hist, bin_edges = image.as_histogram()
>>> for k in range(len(hist)):
>>>     plt.subplot(1, len(hist), k + 1)
>>>     width = 0.7 * (bin_edges[k][1] - bin_edges[k][0])
>>>     centre = (bin_edges[k][:-1] + bin_edges[k][1:]) / 2
>>>     plt.bar(centre, hist[k], align='center', width=width)
-
as_masked(mask=None, copy=True)[source]¶
It is impossible for a BooleanImage to be transformed to a MaskedImage.
-
as_vector(**kwargs)¶
Returns a flattened representation of the object as a single vector.

Returns: vector ((N,) ndarray) – The core representation of the object, flattened into a single vector. Note that this is always a view back on to the original object, but is not writable.
-
bounds_false(boundary=0, constrain_to_bounds=True)[source]¶
Returns the minimum to maximum indices along all dimensions that fully surround the False mask values. In the case of a 2D image, for instance, the min and max define two corners of a rectangle bounding the False pixel values.

Parameters:
- boundary (int >= 0, optional) – A number of pixels that should be added to the extent. A negative value can be used to shrink the bounds in.
- constrain_to_bounds (bool, optional) – If True, the bounding extent is snapped to not go beyond the edge of the image. If False, the bounds are left unchanged.

Returns:
- min_b ((D,) ndarray) – The minimum extent of the False mask region with the boundary along each dimension. If constrain_to_bounds=True, is clipped to legal image bounds.
- max_b ((D,) ndarray) – The maximum extent of the False mask region with the boundary along each dimension. If constrain_to_bounds=True, is clipped to legal image bounds.
-
bounds_true(boundary=0, constrain_to_bounds=True)[source]¶
Returns the minimum to maximum indices along all dimensions that fully surround the True mask values. In the case of a 2D image, for instance, the min and max define two corners of a rectangle bounding the True pixel values.

Parameters:
- boundary (int, optional) – A number of pixels that should be added to the extent. A negative value can be used to shrink the bounds in.
- constrain_to_bounds (bool, optional) – If True, the bounding extent is snapped to not go beyond the edge of the image. If False, the bounds are left unchanged.

Returns:
- min_b ((D,) ndarray) – The minimum extent of the True mask region with the boundary along each dimension. If constrain_to_bounds=True, is clipped to legal image bounds.
- max_b ((D,) ndarray) – The maximum extent of the True mask region with the boundary along each dimension. If constrain_to_bounds=True, is clipped to legal image bounds.
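The bounding-extent computation behind bounds_true can be sketched with plain numpy. This is an illustrative standalone function, not menpo's implementation:

```python
import numpy as np

# Sketch of bounds_true semantics: find min/max indices of True values,
# pad by `boundary`, and optionally clip to the legal image bounds.
def bounds_true(mask, boundary=0, constrain_to_bounds=True):
    indices = np.argwhere(mask)               # (n_true, n_dims)
    min_b = indices.min(axis=0) - boundary
    max_b = indices.max(axis=0) + boundary
    if constrain_to_bounds:
        limit = np.array(mask.shape) - 1
        min_b = np.clip(min_b, 0, limit)
        max_b = np.clip(max_b, 0, limit)
    return min_b, max_b

mask = np.zeros((5, 5), dtype=bool)
mask[1:3, 2:4] = True
min_b, max_b = bounds_true(mask, boundary=1)
# min_b == [0, 1], max_b == [3, 4]
```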
-
centre()¶
The geometric centre of the Image - the subpixel that is in the middle. Useful for aligning shapes and images.

Type: (n_dims,) ndarray
-
constrain_landmarks_to_bounds()¶
Move landmarks that are located outside the image bounds onto the bounds.
-
constrain_points_to_bounds(points)¶
Constrains the points provided to be within the bounds of this image.

Parameters: points ((d,) ndarray) – Points to be snapped to the image boundaries.
Returns: bounded_points ((d,) ndarray) – Points snapped to not stray outside the image edges.
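The snapping behaviour is essentially a per-dimension clip. A numpy-only sketch of the semantics (illustrative; the exact boundary convention is an assumption here, clipping into [0, shape - 1]):

```python
import numpy as np

# Sketch of constrain_points_to_bounds: clip each point coordinate into
# the legal index range along the matching image dimension.
def constrain_points_to_bounds(points, shape):
    return np.clip(points, 0, np.array(shape) - 1)

points = np.array([[-2.0, 3.0], [10.0, 1.5]])
bounded = constrain_points_to_bounds(points, (5, 5))
# bounded == [[0.0, 3.0], [4.0, 1.5]]
```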
-
constrain_to_landmarks(group=None, batch_size=None)[source]¶
Restricts this mask to be equal to the convex hull around the landmarks chosen. This is not a per-pixel convex hull, but instead relies on a triangulated approximation. If the landmarks in question are an instance of TriMesh, the triangulation of the landmarks will be used in the convex hull calculation. If the landmarks are an instance of PointCloud, Delaunay triangulation will be used to create a triangulation.

Parameters:
- group (str, optional) – The key of the landmark set that should be used. If None, and if there is only one set of landmarks, this set will be used.
- batch_size (int or None, optional) – This should only be considered for large images. Setting this value will cause constraining to become much slower. This size indicates how many points in the image should be checked at a time, which keeps memory usage low. If None, no batching is used and all points are checked at once.
-
constrain_to_pointcloud(pointcloud, batch_size=None, point_in_pointcloud='pwa')[source]¶
Restricts this mask to be equal to the convex hull around a pointcloud. The choice of whether a pixel is inside or outside of the pointcloud is determined by the point_in_pointcloud parameter. By default a Piecewise Affine transform is used to test for containment, which is useful when aligning images by their landmarks. The triangulation will be decided by Delaunay - if you wish to customise it, a TriMesh instance can be passed for the pointcloud argument. In this case, the triangulation of the TriMesh will be used to define the retained region.

For large images, a faster and pixel-accurate method can be used ('convex_hull'). Here, there is no specialization for TriMesh instances. Alternatively, a callable can be provided to override the test. By default, the provided implementations are only valid for 2D images.

Parameters:
- pointcloud (PointCloud or TriMesh) – The pointcloud of points that should be constrained to. See point_in_pointcloud for how in some cases a TriMesh may be used to control triangulation.
- batch_size (int or None, optional) – This should only be considered for large images. Setting this value will cause constraining to become much slower. This size indicates how many points in the image should be checked at a time, which keeps memory usage low. If None, no batching is used and all points are checked at once. By default, this is only used for the 'pwa' point_in_pointcloud choice.
- point_in_pointcloud ({'pwa', 'convex_hull'} or callable) – The method used to check if pixels in the image fall inside the pointcloud or not. If 'pwa', Menpo's PiecewiseAffine transform will be used to test for containment. In this case pointcloud should be a TriMesh. If it isn't, Delaunay triangulation will be used to first triangulate pointcloud into a TriMesh before testing for containment. If a callable is passed, it should take two parameters, the PointCloud to constrain with and the pixel locations ((d, n_dims) ndarray) to test, and should return a (d, 1) boolean ndarray of whether the pixels were inside (True) or outside (False) of the PointCloud.

Raises:
ValueError – If the image is not 2D and a default implementation is chosen.
ValueError – If the chosen point_in_pointcloud is unknown.
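The default 'pwa' choice ultimately reduces to testing whether each pixel centre falls inside some triangle of the triangulation. A minimal numpy-only sketch of one such per-triangle containment test, via barycentric coordinates (illustrative only; menpo's actual test goes through its PiecewiseAffine transform):

```python
import numpy as np

# Test which of `points` lie inside the 2D triangle with vertices `tri`
# ((3, 2) ndarray), using barycentric coordinates.
def points_in_triangle(points, tri):
    a, b, c = tri
    v0, v1 = c - a, b - a
    v2 = points - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    u = (d11 * d20 - d01 * d21) / denom
    v = (d00 * d21 - d01 * d20) / denom
    # inside iff both barycentric weights are non-negative and sum <= 1
    return (u >= 0) & (v >= 0) & (u + v <= 1)

tri = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
pts = np.array([[1.0, 1.0], [3.5, 3.5]])
inside = points_in_triangle(pts, tri)
# inside == [True, False]
```

Running this test over every triangle (or batching pixels, as batch_size does) yields the boolean mask over the whole image.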
-
copy()¶
Generate an efficient copy of this object.

Note that Numpy arrays and other Copyable objects on self will be deeply copied. Dictionaries and sets will be shallow copied, and everything else will be assigned (no copy will be made).

Classes that store state other than numpy arrays and immutable types should overwrite this method to ensure all state is copied.

Returns: type(self) – A copy of this object
-
crop(min_indices, max_indices, constrain_to_boundary=False, return_transform=False)¶
Return a cropped copy of this image using the given minimum and maximum indices. Landmarks are correctly adjusted so they maintain their position relative to the newly cropped image.

Parameters:
- min_indices ((n_dims,) ndarray) – The minimum index over each dimension.
- max_indices ((n_dims,) ndarray) – The maximum index over each dimension.
- constrain_to_boundary (bool, optional) – If True, the crop will be snapped to not go beyond this image's boundary. If False, an ImageBoundaryError will be raised if an attempt is made to go beyond the edge of the image.
- return_transform (bool, optional) – If True, then the Transform object that was used to perform the cropping is also returned.

Returns:
- cropped_image (type(self)) – A new instance of self, but cropped.
- transform (Transform) – The transform that was used. It only applies if return_transform is True.

Raises:
ValueError – min_indices and max_indices both have to be of length n_dims. All max_indices must be greater than min_indices.
ImageBoundaryError – Raised if constrain_to_boundary=False, and an attempt is made to crop the image in a way that violates the image bounds.
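On the underlying pixel data, cropping is essentially multi-dimensional slicing between the two index vectors. A numpy-only sketch of the semantics (illustrative; menpo's crop also adjusts landmarks, and the exclusive-max convention here is an assumption):

```python
import numpy as np

# Sketch of crop: slice between min_indices (inclusive) and max_indices
# (exclusive), optionally snapping both to the legal image bounds first.
def crop(pixels, min_indices, max_indices, constrain_to_boundary=False):
    min_i = np.asarray(min_indices)
    max_i = np.asarray(max_indices)
    shape = np.array(pixels.shape)
    if constrain_to_boundary:
        min_i = np.clip(min_i, 0, shape)
        max_i = np.clip(max_i, 0, shape)
    slices = tuple(slice(lo, hi) for lo, hi in zip(min_i, max_i))
    return pixels[slices]

image = np.arange(25).reshape(5, 5)
cropped = crop(image, (1, 1), (4, 3))
# cropped.shape == (3, 2); cropped[0, 0] == 6
```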
-
crop_to_landmarks(group=None, boundary=0, constrain_to_boundary=True, return_transform=False)¶
Return a copy of this image cropped so that it is bounded around a set of landmarks with an optional n_pixel boundary.

Parameters:
- group (str, optional) – The key of the landmark set that should be used. If None and if there is only one set of landmarks, this set will be used.
- boundary (int, optional) – An extra padding to be added all around the landmarks bounds.
- constrain_to_boundary (bool, optional) – If True, the crop will be snapped to not go beyond this image's boundary. If False, an ImageBoundaryError will be raised if an attempt is made to go beyond the edge of the image.
- return_transform (bool, optional) – If True, then the Transform object that was used to perform the cropping is also returned.

Raises: ImageBoundaryError – Raised if constrain_to_boundary=False, and an attempt is made to crop the image in a way that violates the image bounds.
-
crop_to_landmarks_proportion(boundary_proportion, group=None, minimum=True, constrain_to_boundary=True, return_transform=False)¶
Crop this image to be bounded around a set of landmarks with a border proportional to the landmark spread or range.

Parameters:
- boundary_proportion (float) – Additional padding to be added all around the landmarks bounds, defined as a proportion of the landmarks range. See the minimum parameter for a definition of how the range is calculated.
- group (str, optional) – The key of the landmark set that should be used. If None and if there is only one set of landmarks, this set will be used.
- minimum (bool, optional) – If True, the specified proportion is relative to the minimum value of the landmarks' per-dimension range; if False, it is relative to the maximum value of the landmarks' per-dimension range.
- constrain_to_boundary (bool, optional) – If True, the crop will be snapped to not go beyond this image's boundary. If False, an ImageBoundaryError will be raised if an attempt is made to go beyond the edge of the image.
- return_transform (bool, optional) – If True, then the Transform object that was used to perform the cropping is also returned.

Raises: ImageBoundaryError – Raised if constrain_to_boundary=False, and an attempt is made to crop the image in a way that violates the image bounds.
-
crop_to_pointcloud(pointcloud, boundary=0, constrain_to_boundary=True, return_transform=False)¶
Return a copy of this image cropped so that it is bounded around a pointcloud with an optional n_pixel boundary.

Parameters:
- pointcloud (PointCloud) – The pointcloud to crop around.
- boundary (int, optional) – An extra padding to be added all around the pointcloud bounds.
- constrain_to_boundary (bool, optional) – If True, the crop will be snapped to not go beyond this image's boundary. If False, an ImageBoundaryError will be raised if an attempt is made to go beyond the edge of the image.
- return_transform (bool, optional) – If True, then the Transform object that was used to perform the cropping is also returned.

Raises: ImageBoundaryError – Raised if constrain_to_boundary=False, and an attempt is made to crop the image in a way that violates the image bounds.
-
crop_to_pointcloud_proportion(pointcloud, boundary_proportion, minimum=True, constrain_to_boundary=True, return_transform=False)¶
Return a copy of this image cropped so that it is bounded around a pointcloud with a border proportional to the pointcloud spread or range.

Parameters:
- pointcloud (PointCloud) – The pointcloud to crop around.
- boundary_proportion (float) – Additional padding to be added all around the pointcloud bounds, defined as a proportion of the pointcloud range. See the minimum parameter for a definition of how the range is calculated.
- minimum (bool, optional) – If True, the specified proportion is relative to the minimum value of the pointcloud's per-dimension range; if False, it is relative to the maximum value of the pointcloud's per-dimension range.
- constrain_to_boundary (bool, optional) – If True, the crop will be snapped to not go beyond this image's boundary. If False, an ImageBoundaryError will be raised if an attempt is made to go beyond the edge of the image.
- return_transform (bool, optional) – If True, then the Transform object that was used to perform the cropping is also returned.

Raises: ImageBoundaryError – Raised if constrain_to_boundary=False, and an attempt is made to crop the image in a way that violates the image bounds.
-
diagonal()¶
The diagonal size of this image.

Type: float
-
extract_channels(channels)¶
A copy of this image with only the specified channels.

Parameters: channels (int or [int]) – The channel index or list of channel indices to retain.
Returns: image (type(self)) – A copy of this image with only the channels requested.
-
extract_patches(patch_centers, patch_shape=(16, 16), sample_offsets=None, as_single_array=True)¶
Extract a set of patches from an image. Given a set of patch centers and a patch size, patches are extracted from within the image, centred on the given coordinates. Sample offsets denote a set of offsets to extract from within a patch. This is very useful if you want to extract a dense set of features around a set of landmarks and simply sample the same grid of patches around the landmarks.

If sample offsets are used, to access the offsets for each patch you need to slice the resulting list. So for 2 offsets, the first center's offset patches would be patches[:2].

Currently only 2D images are supported.

Parameters:
- patch_centers (PointCloud) – The centers to extract patches around.
- patch_shape ((1, n_dims) tuple or ndarray, optional) – The size of the patch to extract.
- sample_offsets ((n_offsets, n_dims) ndarray or None, optional) – The offsets to sample from within a patch. So (0, 0) is the centre of the patch (no offset) and (1, 0) would be sampling the patch from 1 pixel up the first axis away from the centre. If None, then no offsets are applied.
- as_single_array (bool, optional) – If True, an (n_center, n_offset, n_channels, patch_shape) ndarray, thus a single numpy array, is returned containing each patch. If False, a list of n_center * n_offset Image objects is returned representing each patch.

Returns: patches (ndarray or list) – The extracted patches: an ndarray if as_single_array=True and a list if as_single_array=False.
Raises: ValueError – If image is not 2D
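The core of patch extraction is slicing a fixed-size window around each centre. A numpy-only sketch for integer centres (illustrative; menpo's extract_patches additionally handles channels, sample offsets and sub-pixel centres):

```python
import numpy as np

# Extract (patch_shape)-sized windows centred on integer coordinates.
def extract_patches(image, centers, patch_shape=(16, 16)):
    half = np.asarray(patch_shape) // 2
    patches = []
    for c in np.asarray(centers):
        lo = c - half                  # top-left corner of the window
        hi = lo + patch_shape          # exclusive bottom-right corner
        patches.append(image[lo[0]:hi[0], lo[1]:hi[1]])
    return np.stack(patches)

image = np.arange(100).reshape(10, 10)
patches = extract_patches(image, [[5, 5], [4, 6]], patch_shape=(2, 2))
# patches.shape == (2, 2, 2)
```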
-
extract_patches_around_landmarks(group=None, patch_shape=(16, 16), sample_offsets=None, as_single_array=True)¶
Extract patches around landmarks existing on this image. Provided the group label (and optionally the landmark label), a set of patches is extracted.

See extract_patches for more information.

Currently only 2D images are supported.

Parameters:
- group (str or None, optional) – The landmark group to use as patch centres.
- patch_shape (tuple or ndarray, optional) – The size of the patch to extract.
- sample_offsets ((n_offsets, n_dims) ndarray or None, optional) – The offsets to sample from within a patch. So (0, 0) is the centre of the patch (no offset) and (1, 0) would be sampling the patch from 1 pixel up the first axis away from the centre. If None, then no offsets are applied.
- as_single_array (bool, optional) – If True, an (n_center, n_offset, n_channels, patch_shape) ndarray, thus a single numpy array, is returned containing each patch. If False, a list of n_center * n_offset Image objects is returned representing each patch.

Returns: patches (ndarray or list) – The extracted patches: an ndarray if as_single_array=True and a list if as_single_array=False.
Raises: ValueError – If image is not 2D
-
from_vector(vector, copy=True)[source]¶
Takes a flattened vector and returns a new BooleanImage formed by reshaping the vector to the correct dimensions. Note that this is rebuilding a boolean image itself from boolean values. The mask is in no way interpreted in performing the operation, in contrast to MaskedImage, where only the masked region is used in from_vector() and as_vector(). Any image landmarks are transferred in the process.

Parameters:
- vector ((n_pixels,) bool ndarray) – A flattened vector of all the pixels of a BooleanImage.
- copy (bool, optional) – If False, no copy of the vector will be taken.

Returns: image (BooleanImage) – New BooleanImage of same shape as this image.
Raises: Warning – If copy=False cannot be honored.
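At the data level, from_vector for a boolean image is a plain reshape of the flat vector back to the mask shape, with no masking applied. A numpy-only sketch of this behaviour (illustrative, not menpo's code):

```python
import numpy as np

# Sketch of BooleanImage.from_vector: reshape the flat boolean vector
# back to the mask's shape; copy only if requested.
def from_vector(vector, shape, copy=True):
    mask = np.asarray(vector, dtype=bool).reshape(shape)
    return mask.copy() if copy else mask

vec = np.array([True, False, False, True])
mask = from_vector(vec, (2, 2))
# mask == [[True, False], [False, True]]
```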
-
from_vector_inplace(vector)¶
Deprecated. Use the non-mutating API, from_vector.

For internal usage in performance-sensitive spots, see _from_vector_inplace().

Parameters: vector ((n_parameters,) ndarray) – Flattened representation of this object.
-
gaussian_pyramid(n_levels=3, downscale=2, sigma=None)¶
Return the gaussian pyramid of this image. The first image of the pyramid will be the original, unmodified, image, and counts as level 1.

Parameters:
- n_levels (int, optional) – Total number of levels in the pyramid, including the original unmodified image.
- downscale (float, optional) – Downscale factor.
- sigma (float, optional) – Sigma for gaussian filter. Default is downscale / 3., which corresponds to a filter mask twice the size of the scale factor that covers more than 99% of the gaussian distribution.

Yields: image_pyramid (generator) – Generator yielding pyramid layers as Image objects.
-
has_nan_values()¶
Tests if the vectorized form of the object contains nan values or not. This is particularly useful for objects with unknown values that have been mapped to nan values.

Returns: has_nan_values (bool) – If the vectorized object contains nan values.
-
indices()¶
Return the indices of all pixels in this image.

Type: (n_dims, n_pixels) ndarray
-
classmethod init_blank(shape, fill=True, round='ceil', **kwargs)[source]¶
Returns a blank BooleanImage of the requested shape.

Parameters:
- shape (tuple or list) – The shape of the image. Any floating point values are rounded according to the round kwarg.
- fill (bool, optional) – The mask value to be set everywhere.
- round ({ceil, floor, round}, optional) – Rounding function to be applied to floating point shapes.

Returns: blank_image (BooleanImage) – A blank mask of the requested size.
-
init_from_rolled_channels(pixels)¶
Create an Image from a set of pixels where the channels axis is on the last axis (the back). This is common in other frameworks, and therefore this method provides a convenient means of creating a menpo Image from such data. Note that a copy is always created due to the need to rearrange the data.

Parameters: pixels ((M, N, ..., Q, C) ndarray) – Array representing the image pixels, with the last axis being channels.
Returns: image (Image) – A new image from the given pixels, with the first axis as the channels.
-
invert()[source]¶
Returns an inverted copy of this boolean image.

Returns: inverted (BooleanImage) – A copy of this boolean mask, where all True values are False and all False values are True.
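On the underlying mask data, inversion is simply logical negation:

```python
import numpy as np

# The data-level equivalent of invert(): negate every mask value.
mask = np.array([[True, False], [False, True]])
inverted = np.logical_not(mask)   # equivalently: ~mask
# inverted == [[False, True], [True, False]]
```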
-
mirror(axis=1, return_transform=False)¶
Return a copy of this image, mirrored/flipped about a certain axis.

Parameters:
- axis (int, optional) – The axis about which to mirror the image.
- return_transform (bool, optional) – If True, then the Transform object that was used to perform the mirroring is also returned.

Returns:
- mirrored_image (type(self)) – The mirrored image.
- transform (Transform) – The transform that was used. It only applies if return_transform is True.

Raises:
ValueError – axis cannot be negative
ValueError – axis={} but the image has {} dimensions
-
normalize_norm(mode='all', **kwargs)¶
Returns a copy of this image normalized such that its pixel values have zero mean and its norm equals 1.

Parameters: mode ({all, per_channel}, optional) – If all, the normalization is over all channels. If per_channel, each channel individually is mean centred and normalized to unit norm.
Returns: image (type(self)) – A copy of this image, normalized.
-
normalize_norm_inplace
(mode='all', **kwargs)¶ Deprecated. See the non-mutating API, normalize_norm().
-
normalize_std(mode='all', **kwargs)¶
Returns a copy of this image normalized such that its pixel values have zero mean and unit variance.

Parameters: mode ({all, per_channel}, optional) – If all, the normalization is over all channels. If per_channel, each channel individually is mean centred and normalized in variance.
Returns: image (type(self)) – A copy of this image, normalized.
-
normalize_std_inplace
(mode='all', **kwargs)¶ Deprecated. See the non-mutating API, normalize_std().
-
pyramid(n_levels=3, downscale=2)¶
Return a rescaled pyramid of this image. The first image of the pyramid will be the original, unmodified, image, and counts as level 1.

Parameters:
- n_levels (int, optional) – Total number of levels in the pyramid, including the original unmodified image.
- downscale (float, optional) – Downscale factor.

Yields: image_pyramid (generator) – Generator yielding pyramid layers as Image objects.
-
rescale(scale, round='ceil', order=1, return_transform=False)¶
Return a copy of this image, rescaled by a given factor. Landmarks are rescaled appropriately.

Parameters:
- scale (float or tuple of floats) – The scale factor. If a tuple, the scale to apply to each dimension. If a single float, the scale will be applied uniformly across each dimension.
- round ({ceil, floor, round}, optional) – Rounding function to be applied to floating point shapes.
- order (int, optional) – The order of interpolation. The order has to be in the range [0, 5]:

  Order  Interpolation
  0      Nearest-neighbor
  1      Bi-linear (default)
  2      Bi-quadratic
  3      Bi-cubic
  4      Bi-quartic
  5      Bi-quintic

- return_transform (bool, optional) – If True, then the Transform object that was used to perform the rescale is also returned.

Returns:
- rescaled_image (type(self)) – A copy of this image, rescaled.
- transform (Transform) – The transform that was used. It only applies if return_transform is True.

Raises: ValueError – If fewer scales than dimensions are provided, or if any scale is less than or equal to 0.
-
rescale_landmarks_to_diagonal_range(diagonal_range, group=None, round='ceil', order=1, return_transform=False)¶
Return a copy of this image, rescaled so that the diagonal of the bounding box containing its landmarks matches the specified diagonal_range.

Parameters:
- diagonal_range ((n_dims,) ndarray) – The diagonal range that we want the landmarks of the returned image to have.
- group (str, optional) – The key of the landmark set that should be used. If None and if there is only one set of landmarks, this set will be used.
- round ({ceil, floor, round}, optional) – Rounding function to be applied to floating point shapes.
- order (int, optional) – The order of interpolation. The order has to be in the range [0, 5]:

  Order  Interpolation
  0      Nearest-neighbor
  1      Bi-linear (default)
  2      Bi-quadratic
  3      Bi-cubic
  4      Bi-quartic
  5      Bi-quintic

- return_transform (bool, optional) – If True, then the Transform object that was used to perform the rescale is also returned.

Returns:
- rescaled_image (type(self)) – A copy of this image, rescaled.
- transform (Transform) – The transform that was used. It only applies if return_transform is True.
-
rescale_pixels(minimum, maximum, per_channel=True)¶
A copy of this image with pixels linearly rescaled to fit a range.

Note that the only pixels that will be considered and rescaled are those that feature in the vectorized form of this image. If you want to use this routine on all the pixels in a MaskedImage, consider using as_unmasked() prior to this call.

Parameters:
- minimum (float) – The minimal value of the rescaled pixels.
- maximum (float) – The maximal value of the rescaled pixels.
- per_channel (boolean, optional) – If True, each channel will be rescaled independently. If False, the scaling will be over all channels.

Returns: rescaled_image (type(self)) – A copy of this image with pixels linearly rescaled to fit in the range provided.
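The linear rescale described above maps each channel's observed [min, max] onto [minimum, maximum]. A numpy-only sketch, assuming menpo's channels-first pixel layout (illustrative, not the library implementation):

```python
import numpy as np

# Sketch of rescale_pixels on a (C, M, N, ...) pixel array: map each
# channel's value range linearly onto [minimum, maximum].
def rescale_pixels(pixels, minimum, maximum, per_channel=True):
    axes = tuple(range(1, pixels.ndim)) if per_channel else None
    lo = pixels.min(axis=axes, keepdims=True)
    hi = pixels.max(axis=axes, keepdims=True)
    unit = (pixels - lo) / (hi - lo)          # normalize to [0, 1]
    return unit * (maximum - minimum) + minimum

pixels = np.array([[[0.0, 2.0], [4.0, 8.0]]])   # one channel, 2x2
out = rescale_pixels(pixels, 0.0, 1.0)
# out spans exactly [0.0, 1.0]
```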
-
rescale_to_diagonal(diagonal, round='ceil', return_transform=False)¶
Return a copy of this image, rescaled so that its diagonal is a new size.

Parameters:
- diagonal (int) – The diagonal size of the new image.
- round ({ceil, floor, round}, optional) – Rounding function to be applied to floating point shapes.
- return_transform (bool, optional) – If True, then the Transform object that was used to perform the rescale is also returned.

Returns:
- rescaled_image (type(self)) – A copy of this image, rescaled.
- transform (Transform) – The transform that was used. It only applies if return_transform is True.
-
rescale_to_pointcloud(pointcloud, group=None, round='ceil', order=1, return_transform=False)¶
Return a copy of this image, rescaled so that the scale of a particular group of landmarks matches the scale of the passed reference pointcloud.

Parameters:
- pointcloud (PointCloud) – The reference pointcloud to which the landmarks specified by group will be scaled to match.
- group (str, optional) – The key of the landmark set that should be used. If None, and if there is only one set of landmarks, this set will be used.
- round ({ceil, floor, round}, optional) – Rounding function to be applied to floating point shapes.
- order (int, optional) – The order of interpolation. The order has to be in the range [0, 5]:

  Order  Interpolation
  0      Nearest-neighbor
  1      Bi-linear (default)
  2      Bi-quadratic
  3      Bi-cubic
  4      Bi-quartic
  5      Bi-quintic

- return_transform (bool, optional) – If True, then the Transform object that was used to perform the rescale is also returned.

Returns:
- rescaled_image (type(self)) – A copy of this image, rescaled.
- transform (Transform) – The transform that was used. It only applies if return_transform is True.
-
resize(shape, order=1, return_transform=False)¶
Return a copy of this image, resized to a particular shape. All image information (landmarks, and mask in the case of MaskedImage) is resized appropriately.

Parameters:
- shape (tuple) – The new shape to resize to.
- order (int, optional) – The order of interpolation. The order has to be in the range [0, 5]:

  Order  Interpolation
  0      Nearest-neighbor
  1      Bi-linear (default)
  2      Bi-quadratic
  3      Bi-cubic
  4      Bi-quartic
  5      Bi-quintic

- return_transform (bool, optional) – If True, then the Transform object that was used to perform the resize is also returned.

Returns:
- resized_image (type(self)) – A copy of this image, resized.
- transform (Transform) – The transform that was used. It only applies if return_transform is True.

Raises: ValueError – If the number of dimensions of the new shape does not match the number of dimensions of the image.
-
rolled_channels()¶
Returns the pixels matrix, with the channels rolled to the back axis. This may be required for interacting with external code bases that require images to have channels as the last axis, rather than the menpo convention of channels as the first axis.

Returns: rolled_channels (ndarray) – Pixels with channels as the back (last) axis.
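The axis convention at play here (and its inverse, used by init_from_rolled_channels) can be sketched with plain numpy:

```python
import numpy as np

# menpo stores channels on the FIRST axis; rolling moves them to the
# last axis for interoperability with channels-last frameworks.
channels_first = np.zeros((3, 120, 80))              # (C, M, N)
channels_last = np.moveaxis(channels_first, 0, -1)   # (M, N, C)
round_trip = np.moveaxis(channels_last, -1, 0)       # back to (C, M, N)
```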
-
rotate_ccw_about_centre
(theta, degrees=True, retain_shape=False, cval=0.0, round='round', order=1, return_transform=False)¶ Return a copy of this image, rotated counter-clockwise about its centre.
Note that the retain_shape argument defines the shape of the rotated image. If
retain_shape=True
, then the shape of the rotated image will be the same as the one of current image, so some regions will probably be cropped. Ifretain_shape=False
, then the returned image has the correct size so that the whole area of the current image is included.Parameters: - theta (float) – The angle of rotation about the centre.
- degrees (bool, optional) – If
True
, theta is interpreted in degrees. IfFalse
,theta
is interpreted as radians. - retain_shape (bool, optional) – If
True
, then the shape of the rotated image will be the same as the one of current image, so some regions will probably be cropped. IfFalse
, then the returned image has the correct size so that the whole area of the current image is included. - cval (float, optional) – The value to be set outside the rotated image boundaries.
- round (
{'ceil', 'floor', 'round'}
, optional) – Rounding function to be applied to floating point shapes. This is only used in caseretain_shape=True
. - order (int, optional) –
The order of interpolation. The order has to be in the range
[0,5]
. This is only used in case retain_shape=True
.Order Interpolation 0 Nearest-neighbor 1 Bi-linear (default) 2 Bi-quadratic 3 Bi-cubic 4 Bi-quartic 5 Bi-quintic - return_transform (bool, optional) – If
True
, then theTransform
object that was used to perform the rotation is also returned.
Returns: - rotated_image (
type(self)
) – The rotated image. - transform (
Transform
) – The transform that was used. It only applies if return_transform is True
.
Raises: ValueError
– Image rotation is presently only supported on 2D images
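The simplest instance of a counter-clockwise rotation about the centre is a 90-degree turn, which numpy provides directly; general angles need interpolation (orders 0–5, as tabulated above). A minimal sketch, not the menpo implementation:

```python
import numpy as np

def rotate90_ccw(img):
    """90-degree counter-clockwise rotation of a 2D array. General
    theta requires resampling via an interpolation order."""
    return np.rot90(img)

img = np.arange(6).reshape(2, 3)
print(rotate90_ccw(img).shape)  # (3, 2)
```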
-
sample
(points_to_sample, mode='constant', cval=False, **kwargs)[source]¶ Sample this image at the given sub-pixel accurate points. The input PointCloud should have the same number of dimensions as the image e.g. a 2D PointCloud for a 2D multi-channel image. A numpy array will be returned that has the values for every given point across each channel of the image.
Parameters: - points_to_sample (
PointCloud
) – Array of points to sample from the image. Should be (n_points, n_dims) - mode (
{constant, nearest, reflect, wrap}
, optional) – Points outside the boundaries of the input are filled according to the given mode. - cval (float, optional) – Used in conjunction with mode
constant
, the value outside the image boundaries.
Returns: sampled_pixels ((n_points, n_channels) bool ndarray) – The interpolated values taken across every channel of the image.
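The boundary handling for mode constant can be sketched as follows. This is a hypothetical nearest-neighbour helper, not the menpo implementation (which supports sub-pixel interpolation): out-of-bounds points receive cval, in-bounds points receive the mask value.

```python
import numpy as np

def sample_mask(mask, points, cval=False):
    """Nearest-neighbour sample a 2D boolean mask at sub-pixel points,
    filling points outside the bounds with `cval` (hypothetical helper
    approximating sample with mode='constant')."""
    points = np.round(np.asarray(points, dtype=float)).astype(int)
    out = np.full(len(points), cval, dtype=bool)
    in_bounds = np.all((points >= 0) & (points < mask.shape), axis=1)
    out[in_bounds] = mask[points[in_bounds, 0], points[in_bounds, 1]]
    return out

mask = np.eye(4, dtype=bool)
pts = np.array([[0.2, 0.1], [1.0, 1.0], [-3.0, 0.0]])
print(sample_mask(mask, pts))  # [ True  True False]
```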
-
set_patches
(patches, patch_centers, offset=None, offset_index=None)[source]¶ Set the values of a group of patches into the correct regions of this image. Given an array of patches and a set of patch centers, the patches’ values are copied in the regions of the image that are centred on the coordinates of the given centers.
The patches argument can have any of the two formats that are returned from the extract_patches() and extract_patches_around_landmarks() methods. Specifically it can be:
(n_center, n_offset, self.n_channels, patch_shape)
ndarray- list of
n_center * n_offset
Image
objects
Currently only 2D images are supported.
Parameters: - patches (ndarray or list) – The values of the patches. It can have any of the two formats that
are returned from the extract_patches() and
extract_patches_around_landmarks() methods. Specifically, it can
either be an
(n_center, n_offset, self.n_channels, patch_shape)
ndarray or a list ofn_center * n_offset
Image
objects. - patch_centers (
PointCloud
) – The centers to set the patches around. - offset (list or tuple or
(1, 2)
ndarray orNone
, optional) – The offset to apply on the patch centers within the image. If None
, then(0, 0)
is used. - offset_index (int or
None
, optional) – The offset index within the provided patches argument, thus the index of the second dimension from which to sample. If None
, then0
is used.
Raises: ValueError
– If image is not 2DValueError
– If offset does not have shape (1, 2)
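The copy-into-regions behaviour can be sketched with numpy slicing. This hypothetical helper handles only the single-offset ndarray format (it drops the n_offset dimension) and does no bounds checking, unlike the real method:

```python
import numpy as np

def set_patches(image, patches, centers):
    """Copy patches of shape (n_center, n_channels, ph, pw) into
    `image` of shape (n_channels, H, W), each centred on the
    corresponding (row, col) entry of `centers`."""
    ph, pw = patches.shape[-2:]
    for patch, (cy, cx) in zip(patches, centers):
        y, x = int(cy) - ph // 2, int(cx) - pw // 2
        image[:, y:y + ph, x:x + pw] = patch
    return image

img = np.zeros((1, 6, 6))
patches = np.ones((2, 1, 2, 2))
out = set_patches(img, patches, [(2, 2), (4, 4)])
print(int(out.sum()))  # 8
```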
-
set_patches_around_landmarks
(patches, group=None, offset=None, offset_index=None)¶ Set the values of a group of patches around the landmarks existing in this image. Given an array of patches, a group and a label, the patches’ values are copied in the regions of the image that are centred on the coordinates of corresponding landmarks.
The patches argument can have any of the two formats that are returned from the extract_patches() and extract_patches_around_landmarks() methods. Specifically it can be:
(n_center, n_offset, self.n_channels, patch_shape)
ndarray- list of
n_center * n_offset
Image
objects
Currently only 2D images are supported.
Parameters: - patches (ndarray or list) – The values of the patches. It can have any of the two formats that
are returned from the extract_patches() and
extract_patches_around_landmarks() methods. Specifically, it can
either be an
(n_center, n_offset, self.n_channels, patch_shape)
ndarray or a list ofn_center * n_offset
Image
objects. - group (str or
None
, optional) – The landmark group to use as patch centres.
(1, 2)
ndarray orNone
, optional) – The offset to apply on the patch centers within the image. If None
, then(0, 0)
is used. - offset_index (int or
None
, optional) – The offset index within the provided patches argument, thus the index of the second dimension from which to sample. If None
, then0
is used.
Raises: ValueError
– If image is not 2DValueError
– If offset does not have shape (1, 2)
-
view_widget
(browser_style='buttons', figure_size=(10, 8), style='coloured')¶ Visualizes the image object using an interactive widget. Currently only supports the rendering of 2D images.
Parameters: - browser_style ({
'buttons'
,'slider'
}, optional) – It defines whether the selector of the images will have the form of plus/minus buttons or a slider. - figure_size ((int, int), optional) – The initial size of the rendered figure.
- style ({
'coloured'
,'minimal'
}, optional) – If'coloured'
, then the style of the widget will be coloured. If minimal
, then the style is simple, using black and white colours.
-
warp_to_mask
(template_mask, transform, warp_landmarks=True, mode='constant', cval=False, batch_size=None, return_transform=False)[source]¶ Return a copy of this
BooleanImage
warped into a different reference space.Note that warping into a mask is slower than warping into a full image. If you don’t need a non-linear mask, consider warp_to_shape instead.
Parameters: - template_mask (
BooleanImage
) – Defines the shape of the result, and what pixels should be sampled. - transform (
Transform
) – Transform from the template space back to this image. Defines, for each pixel location on the template, which pixel location should be sampled from on this image. - warp_landmarks (bool, optional) – If
True
, result will have the same landmark dictionary as self, but with each landmark updated to the warped position. - mode (
{constant, nearest, reflect or wrap}
, optional) – Points outside the boundaries of the input are filled according to the given mode. - cval (float, optional) – Used in conjunction with mode
constant
, the value outside the image boundaries. - batch_size (int or
None
, optional) – This should only be considered for large images. Setting this value can cause warping to become much slower, particularly for cached warps such as Piecewise Affine. This size indicates how many points in the image should be warped at a time, which keeps memory usage low. If None
, no batching is used and all points are warped at once. - return_transform (bool, optional) – This argument is for internal use only. If
True
, then theTransform
object is also returned.
Returns: - warped_image (
BooleanImage
) – A copy of this image, warped. - transform (
Transform
) – The transform that was used. It only applies if return_transform is True
.
-
warp_to_shape
(template_shape, transform, warp_landmarks=True, mode='constant', cval=False, order=None, batch_size=None, return_transform=False)[source]¶ Return a copy of this
BooleanImage
warped into a different reference space.Note that the order keyword argument is in fact ignored, as any order other than 0 makes no sense on a binary image. The keyword argument is present only for compatibility with the
Image
warp_to_shape API.Parameters: - template_shape (
(n_dims, )
tuple or ndarray) – Defines the shape of the result, and what pixel indices should be sampled (all of them). - transform (
Transform
) – Transform from the template_shape space back to this image. Defines, for each index on template_shape, which pixel location should be sampled from on this image. - warp_landmarks (bool, optional) – If
True
, result will have the same landmark dictionary as self, but with each landmark updated to the warped position. - mode (
{constant, nearest, reflect or wrap}
, optional) – Points outside the boundaries of the input are filled according to the given mode. - cval (float, optional) – Used in conjunction with mode
constant
, the value outside the image boundaries. - batch_size (int or
None
, optional) – This should only be considered for large images. Setting this value can cause warping to become much slower, particularly for cached warps such as Piecewise Affine. This size indicates how many points in the image should be warped at a time, which keeps memory usage low. If None
, no batching is used and all points are warped at once. - return_transform (bool, optional) – This argument is for internal use only. If
True
, then theTransform
object is also returned.
Returns: - warped_image (
BooleanImage
) – A copy of this image, warped. - transform (
Transform
) – The transform that was used. It only applies if return_transform is True
.
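The index-pulling behaviour described above (for each index on the template shape, the transform gives the source location to sample) can be sketched for the 2D boolean case. This is a hypothetical helper, not the menpo implementation; it uses nearest-neighbour sampling, consistent with the note that only order 0 makes sense on a binary image:

```python
import numpy as np

def warp_mask_to_shape(mask, shape, transform):
    """Warp a 2D boolean `mask` into a template of the given `shape`:
    `transform` maps each template (y, x) index back to a location in
    `mask`, which is sampled with nearest-neighbour; out-of-bounds
    locations become False."""
    ys, xs = np.indices(shape)
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    src = np.round(transform(coords)).astype(int)
    out = np.zeros(shape, dtype=bool)
    valid = np.all((src >= 0) & (src < mask.shape), axis=1)
    out.ravel()[valid] = mask[src[valid, 0], src[valid, 1]]
    return out

mask = np.zeros((4, 4), dtype=bool)
mask[0, 0] = True
# a pure translation by (1, 1): template (y, x) samples mask (y-1, x-1)
shifted = warp_mask_to_shape(mask, (4, 4), lambda p: p - 1)
print(shifted[1, 1], shifted[0, 0])  # True False
```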
-
zoom
(scale, cval=0.0, return_transform=False)¶ Return a copy of this image, zoomed about the centre point.
scale
values greater than 1.0 denote zooming in to the image and values less than 1.0 denote zooming out of the image. The size of the image will not change; if you wish to scale an image, please see rescale()
.Parameters: - scale (float) –
scale > 1.0
denotes zooming in. Thus the image will appear larger and areas at the edge of the zoom will be ‘cropped’ out.scale < 1.0
denotes zooming out. The image will be padded by the value ofcval
. - cval (
float
, optional) – The value to be set outside the zoomed image boundaries.
True
, then theTransform
object that was used to perform the zooming is also returned.
Returns: - zoomed_image (
type(self)
) – A copy of this image, zoomed. - transform (
Transform
) – The transform that was used. It only applies if return_transform is True
.
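The fixed-output-shape semantics of zoom can be sketched as a coordinate mapping about the centre. This is a hypothetical nearest-neighbour helper, not the menpo implementation: with scale > 1.0 the edges are cropped out, with scale < 1.0 the borders are filled with cval, and the shape never changes:

```python
import numpy as np

def zoom_about_centre(img, scale, cval=0.0):
    """Zoom a 2D array about its centre, keeping the output shape
    equal to the input shape; each output pixel samples the input at
    a location scaled about the centre (nearest-neighbour)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.indices(img.shape)
    sy = np.round((ys - cy) / scale + cy).astype(int)
    sx = np.round((xs - cx) / scale + cx).astype(int)
    out = np.full(img.shape, cval, dtype=img.dtype)
    valid = (sy >= 0) & (sy < h) & (sx >= 0) & (sx < w)
    out[valid] = img[sy[valid], sx[valid]]
    return out

img = np.arange(16.0).reshape(4, 4)
print(zoom_about_centre(img, 2.0).shape)  # (4, 4)
```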
-
has_landmarks
¶ Whether the object has landmarks.
Type: bool
-
has_landmarks_outside_bounds
¶ Indicates whether there are landmarks located outside the image bounds.
Type: bool
-
height
¶ The height of the image.
This is the height according to image semantics, and is thus the size of the second to last dimension.
Type: int
-
landmarks
¶ The landmarks object.
Type: LandmarkManager
-
mask
¶ Returns the pixels of the mask with no channel axis. This is what should be used to mask any k-dimensional image.
Type: (M, N, ..., L)
, bool ndarray
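Because the mask carries no channel axis, it can index the spatial axes of any channels-first pixel array directly. A minimal numpy sketch of this usage (illustrative, not the menpo implementation):

```python
import numpy as np

pixels = np.random.rand(3, 5, 5)        # (n_channels, M, N)
mask = np.zeros((5, 5), dtype=bool)
mask[1:3, 1:3] = True                   # expose a 2x2 masked region
masked_values = pixels[:, mask]         # (n_channels, n_true_pixels)
print(masked_values.shape)              # (3, 4)
```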
-
n_channels
¶ The number of channels on each pixel in the image.
Type: int
-
n_dims
¶ The number of dimensions in the image. The minimum possible
n_dims
is 2.Type: int
-
n_elements
¶ Total number of data points in the image
(prod(shape), n_channels)
Type: int
-
n_landmark_groups
¶ The number of landmark groups on this object.
Type: int
-
n_parameters
¶ The length of the vector that this object produces.
Type: int
-
n_pixels
¶ Total number of pixels in the image
(prod(shape),)
Type: int
-
shape
¶ The shape of the image (with
n_channels
values at each point).Type: tuple
-
width
¶ The width of the image.
This is the width according to image semantics, and is thus the size of the last dimension.
Type: int