Artificial-intelligence-based image inpainting method and device
WO2022080680A1
Abstract
An operation method for image inpainting comprises the steps of: classifying objects included in a first image on the basis of per-pixel information of the first image; determining, from among the objects, a first object to be corrected; setting a boundary between a second object and a third object in an area corresponding to the first object removed from the first image; and recovering, with respect to the boundary, the area corresponding to the first object on the basis of pixel information of the second object and the third object. According to the present invention, images are analyzed on the basis of artificial intelligence and classified by object; after at least one of the classified objects is removed, the image is recovered, so that an easier image inpainting service can be provided to the user.
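The boundary-based recovery described above can be sketched as follows. This is a minimal numpy illustration: the patent's object classification, boundary placement, and recovery are AI-based, and the horizontal boundary and mean-color fill here are simplifying assumptions for demonstration.

```python
import numpy as np

def inpaint_removed_object(image, labels, target, neighbor_a, neighbor_b, boundary_row):
    # Fill the removed object's area from the two neighboring objects,
    # split at a horizontal boundary (a crude stand-in for the patent's
    # learned boundary and recovery step).
    out = image.astype(float).copy()
    mean_a = image[labels == neighbor_a].mean(axis=0)  # e.g. sky color
    mean_b = image[labels == neighbor_b].mean(axis=0)  # e.g. ground color
    ys, xs = np.nonzero(labels == target)
    for y, x in zip(ys, xs):
        out[y, x] = mean_a if y < boundary_row else mean_b
    return out
```

Pixels of the removed object above the boundary take the second object's statistics, those below take the third object's.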
Method for high resolution image inpainting, processing system and
associated computer program product
WO2021194361A1
A computer-implemented method for high resolution image inpainting comprising the following steps: providing a high resolution input image; providing at least one inpainting mask indicating at least one set of pixels to be removed and synthesized; selecting at least one rectangular sub-region of the input image and at least one aligned rectangular sub-region of the inpainting mask such that the rectangular sub-region of the input image encompasses at least one set of pixels to be removed and synthesized, said at least one sub-region of the input image and its corresponding aligned sub-region of the inpainting mask having an identical, minimum possible size and a position for which a calculated information gain does not decrease; processing said sub-region of the input image and its corresponding aligned sub-region of the inpainting mask by a machine learning model in such a manner that an inpainted sub-region of the input image is obtained, along with at least one synthesized set of pixels fitting the surrounding parts of the input image both semantically and visually; and generating an output high resolution image comprising said inpainted sub-region having at least one synthesized set of pixels fitting the surrounding parts of the input image both semantically and visually.
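The sub-region selection and paste-back can be sketched as follows, with the machine learning model abstracted behind an `inpaint_fn` callback. The bounding-box crop with fixed context padding is a simplification of the patent's information-gain-driven, minimum-size placement.

```python
import numpy as np

def mask_bbox(mask):
    # Smallest rectangle covering all pixels to be removed and synthesized.
    ys, xs = np.nonzero(mask)
    return ys.min(), ys.max() + 1, xs.min(), xs.max() + 1

def inpaint_high_res(image, mask, inpaint_fn, pad=2):
    # Crop the aligned image/mask sub-regions, inpaint only the crop,
    # then paste the result back into the full-resolution image.
    h, w = mask.shape
    y0, y1, x0, x1 = mask_bbox(mask)
    y0, x0 = max(y0 - pad, 0), max(x0 - pad, 0)
    y1, x1 = min(y1 + pad, h), min(x1 + pad, w)
    out = image.copy()
    out[y0:y1, x0:x1] = inpaint_fn(image[y0:y1, x0:x1], mask[y0:y1, x0:x1])
    return out
```

Only the crop ever reaches the model, which is what keeps the approach tractable at high resolution.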
Image inpainting method and device
US20230245283A1
The present disclosure provides an image inpainting method and an image inpainting device. The
present disclosure in at least one embodiment provides a method of inpainting an image area obscured
by an object included in an image, including selecting one or more frames among input frames of the
image, wherein each of the one or more frames is selected as a target frame including an inpainting area
obscured by the object or as a reference frame including information on the inpainting area, and
performing at least one of inter inpainting or intra inpainting depending on a number of frames selected
as the reference frame.
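The inter/intra decision can be sketched as follows. The pixel-averaging "inter" fill and mean-color "intra" fallback are placeholder assumptions standing in for the patent's actual inpainting models; only the frame-selection logic mirrors the abstract.

```python
import numpy as np

def inpaint_frame(target, hole, references):
    # Inter inpainting when reference frames with information on the
    # hole exist (borrow their pixels); intra inpainting otherwise
    # (here: the frame's own mean intensity, a spatial-fill placeholder).
    out = target.astype(float).copy()
    hole = hole.astype(bool)
    if references:
        stack = np.stack(references).astype(float).mean(axis=0)
        out[hole] = stack[hole]
    else:
        out[hole] = out[~hole].mean()
    return out
```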
Generating neutral-pose transformations of self-portrait images
US11024060B1
Techniques are provided for converting a self-portrait image into a neutral-pose portrait image,
including receiving a self-portrait input image, which contains at least one person who is the subject of
the self-portrait. A nearest pose search selects a target neutral-pose image that closely matches or
approximates the pose of the upper torso region of the subject in the self-portrait input image.
Coordinate-based inpainting maps pixels from the upper torso region in the self-portrait input image to
corresponding regions in the selected target neutral-pose image to produce a coarse result image. A
neutral-pose composition refines the coarse result image by synthesizing details in the body region of
the subject (which in some cases includes the subject's head, arms, and torso), and inpainting pixels into
missing portions of the background. The refined image is composited with the original self-portrait input
image to produce a neutral-pose result image.
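The nearest pose search step can be sketched as a keypoint-distance lookup over a gallery of neutral-pose images. The L2 metric over upper-torso keypoints is an assumption; the patent does not pin down the distance measure in this abstract.

```python
import numpy as np

def nearest_neutral_pose(selfie_torso, gallery):
    # Return the index of the gallery entry whose upper-torso keypoints
    # are closest (Euclidean distance) to the selfie subject's keypoints.
    dists = [np.linalg.norm(selfie_torso - g) for g in gallery]
    return int(np.argmin(dists))
```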
Depth-guided video inpainting for autonomous driving
US11282164B2
Systems and methods of video inpainting for autonomous driving are disclosed. For example, the
method stitches a multiplicity of depth frames into a 3D map, where one or more objects in the depth
frames have previously been removed. The method further projects the 3D map onto a first image
frame to generate a corresponding depth map, where the first image frame includes a target inpainting
region. For each target pixel within the target inpainting region of the first image frame, based on the
corresponding depth map, the method further maps the target pixel within the target inpainting region
of the first image frame to a candidate pixel in a second image frame. The method further determines a
candidate color to fill the target pixel. The method further performs Poisson image editing on the first
image frame to achieve color consistency at a boundary and between inside and outside of the target
inpainting region of the first image frame. For each pixel in the target inpainting region of the first image
frame, the method further traces the pixel into neighboring frames and replaces an original color of the
pixel with an average of colors sampled from the neighboring frames.
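The depth-guided pixel mapping at the heart of this method can be sketched with a pinhole camera model; `K` (camera intrinsics) and `T_1_to_2` (the 4x4 relative pose between the two frames) are assumed inputs derived from the stitched 3D map and calibration.

```python
import numpy as np

def map_pixel(u, v, depth, K, T_1_to_2):
    # Back-project pixel (u, v) with its depth-map value into 3D via the
    # intrinsics K, move the point into the second camera's frame with
    # the relative pose, and re-project it to find the candidate pixel.
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    point = T_1_to_2 @ np.append(ray * depth, 1.0)  # homogeneous 3D point
    q = K @ point[:3]
    return q[0] / q[2], q[1] / q[2]
```

The candidate pixel's color then fills the target pixel, with Poisson editing smoothing the seams afterwards.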
Image replacement inpainting
US20220301118A1
A method for replacing an object in an image. The method may include identifying a first object at a
position within a first image, masking, based on the first image and the position of the first object, a
target area to produce a masked image, generating, based on the masked image and an inpainting
machine learning model, a second image different from the first image, the inpainting machine learning
model being trained using a difference between the target area of training images and content of
generated images at locations corresponding to the target area of the training images, generating, based
on the masked image and the second image, a third image, and adding, to the third image, a new object
different from the first object.
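The training signal described above can be sketched as a masked reconstruction loss. The L1 distance is an assumption, since the abstract only says "a difference" between the target area and the generated content is used.

```python
import numpy as np

def masked_l1_loss(generated, training, mask):
    # Compare only the target (masked) area of the training image with
    # the generated content at the corresponding location.
    m = mask.astype(bool)
    return float(np.abs(generated[m] - training[m]).mean())
```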
Low-illumination image enhancement method and system based on
image inpainting technology
CN113450290B
The invention relates to a low-illumination image enhancement method and a system based on an
image inpainting technology, wherein the image enhancement method comprises the following steps:
step 1, collecting image data and preprocessing the image data; step 2, constructing a decomposition
network model, and importing the preprocessed image data into the decomposition network model;
step 3, generating a noise map Mask; step 4, constructing a recovery network, and performing color
enhancement and detail recovery on the decomposed image data; step 5, constructing a selection
kernel enhancement module and expanding the receptive field of the image; and step 6, constructing an
image repairing module, repairing the image hole and expanding effective information. The invention
effectively fuses the image repairing technology and the low-illumination image recovery, solves the
problem of detail loss caused by noise, and can repair the lost detail information while removing the
noise and further obtain better visual effect.
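The decomposition step can be sketched with a classical Retinex-style split. The hand-crafted max-channel illumination estimate below is an assumption standing in for the patent's learned decomposition network; the gamma correction is likewise an illustrative enhancement stage.

```python
import numpy as np

def retinex_decompose(image, eps=1e-6):
    # Retinex assumption: image = reflectance * illumination. Estimate
    # illumination as the per-pixel maximum over color channels and
    # divide it out to recover reflectance.
    illumination = image.max(axis=-1, keepdims=True)
    reflectance = image / (illumination + eps)
    return reflectance, illumination

def enhance(image, gamma=0.5, eps=1e-6):
    # Brighten by gamma-correcting the illumination only, then recompose,
    # so colors (reflectance) are preserved while darkness is lifted.
    r, i = retinex_decompose(image, eps)
    return r * np.power(i, gamma)
```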
Training method for image inpainting model and image inpainting
method, apparatus, and device
WO2022252883A1
The present application discloses a training method for an image inpainting model, and an image
inpainting method, apparatus, and device. By means of extracting image features from a real first pixel
quality image and an artificially synthesized first pixel quality image, and by means of a real first pixel
quality image generator and a real second pixel quality image generator, a regenerated real first pixel
quality image, a pseudo-real first pixel quality image, and a reconstructed second pixel quality image are
obtained. Domain alignment loss, image generation loss and image reconstruction loss are calculated
respectively, and training is executed by using the obtained losses. An image inpainting model obtained
after training has better generalization capability, achieves relatively accurate image inpainting, and
exhibits relatively good model performance. An image feature encoder and the real second pixel quality image
generator obtained by training are used to inpaint a first pixel quality image to be inpainted, so that a
second pixel quality image having a relatively good inpainting effect can be obtained, and requirements
for image use are met.
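The three-loss training objective can be sketched as follows. The first-moment feature matching used for domain alignment and the loss weights are illustrative assumptions, not details from the patent.

```python
import numpy as np

def domain_alignment_loss(real_feats, synth_feats):
    # Match the mean feature of real low-quality images against that of
    # artificially synthesized ones (a stand-in for the patent's
    # domain alignment loss).
    return float(np.linalg.norm(real_feats.mean(axis=0) - synth_feats.mean(axis=0)))

def training_loss(align, gen, recon, w=(1.0, 1.0, 10.0)):
    # Weighted sum of the domain alignment, image generation, and image
    # reconstruction losses named in the abstract (weights assumed).
    return w[0] * align + w[1] * gen + w[2] * recon
```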
Infrared dim target detection method based on image inpainting and
background estimation
CN112802020A
The invention discloses an infrared dim target detection method based on image inpainting and
background estimation, which comprises the following steps: s1, inputting an original infrared image; s2,
suppressing the background clutter of the original infrared image by a spatial-domain filtering method to
obtain a filtered image; s3, segmenting suspected dim targets by using a threshold segmentation
method; s4, restoring and estimating the pixels where the suspected dim targets are located by using
image inpainting, and reconstructing an infrared background image; s5, subtracting the reconstructed
infrared background image from the original infrared image to obtain a target saliency map; s6,
segmenting the dim target from the target saliency map by using the threshold segmentation method
again, and outputting the dim target information. The method effectively addresses the low detection
rate and high false alarm rate caused by factors such as noise, clutter, and edges in existing methods; it
has low algorithmic complexity and short computation time, and readily meets the high real-time
requirements of practical applications.
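The pipeline of steps s2–s6 can be sketched in numpy-only form. The local-mean background repair and the specific thresholds are simplifying assumptions; the patent's filtering and inpainting stages would be more elaborate.

```python
import numpy as np

def detect_dim_targets(img, k=3, thresh=50):
    # s3: flag pixels well above their k x k local mean as suspected targets;
    # s4: "repair" those pixels with the local mean to estimate background;
    # s5: subtract the background to get a saliency map;
    # s6: threshold the saliency map again for the final detections.
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    local = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            local += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    local /= k * k                                  # local mean (background prior)
    suspected = img > local + thresh                # s3
    background = np.where(suspected, local, img)    # s4
    saliency = img - background                     # s5
    return saliency > thresh / 2                    # s6
```

A lone bright pixel on a flat background survives both thresholds; smooth clutter is absorbed into the background estimate.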