
http://www.diva-portal.org

Postprint

This is the accepted version of a paper presented at Concepts in Action: Representation, Learning, and Application (CARLA).

Citation for the original published paper:

Längkvist, M., Persson, A., Loutfi, A. (2020)
Learning Generative Image Manipulations from Language Instructions
In:

N.B. When citing this work, cite the original published paper.

Permanent link to this version:


Learning Generative Image Manipulations from

Language Instructions

Martin Längkvist, Andreas Persson, and Amy Loutfi

Center for Applied Autonomous Sensor Systems (AASS), Örebro University, Sweden
{martin.langkvist,andreas.persson,amy.loutfi}@oru.se

This paper studies whether a perceptual visual system can simulate human-like cognitive capabilities by training a computational model to predict the outcome of an action from a language instruction. The aim is to ground action words such that an AI is able to generate an output image that shows the effect of a certain action on a given object. Figure 1 illustrates the idea in principle: the input consists of an image containing several objects of different shape, size, and color, together with an instruction for how to manipulate one of the objects (i.e., move, remove, add, or replace). The output of the model is a synthetically generated image that demonstrates the effect that the action has on the scene.1

Fig. 1. A conceptual overview of the proposed model, here with the example instruction "Remove small red sphere".

To visualize the effect of a certain action, a computational model must address a number of different sub-tasks, including: 1) image encoding, 2) language learning, 3) relational learning, and 4) image generation. In the literature, there are works that combine some of these tasks to solve problems such as image captioning [8], image editing [2], image generation from text descriptions [5], visual question answering [6], paired robot action and linguistic translation [7], and Vision-and-Language Navigation (VLN) [1]. However, combining all four sub-tasks, and learning their shared representations, is still an unaddressed challenge.

1 Proposed Model and Results

The proposed model, referred to as DCGAN+LSTM+RN, consists of a Deep Convolutional Generative Adversarial Network (DCGAN) [4] as image encoder and decoder, a Long Short-Term Memory (LSTM) network over a pre-defined dictionary of 17 word representations as language encoder, and a Relational Network (RN) [6] for learning relations between object pairs and for merging the image and language representations. The model is trained in a Generative Adversarial Network (GAN) [3] setting and is conditioned on both a source image and the action text description in order to generate a target image of the scene after the action has been performed. The model was trained on a dataset of 10,000 generated input-output image pairs covering 4 actions on objects of 3 types, 3 colors, and 2 sizes. The Root-Mean-Square Error (RMSE) between generated and target images on a test set of 200 images, and some visual results, can be seen in Figure 2.A and Figure 2.B, respectively.

1 This work is funded by the Swedish Research Council (Vetenskapsrådet), grant number: 2016-05321.
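The paper gives only this high-level description of the architecture. The following PyTorch sketch shows one way the four components could be wired together; all layer sizes, class names (ImageEncoder, SentenceEncoder, RelationNetwork, ImageDecoder, generate), and the choice of framework are illustrative assumptions, not the authors' implementation.

```python
# Minimal PyTorch sketch of the DCGAN+LSTM+RN pipeline described above.
# Layer sizes and names are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn

class ImageEncoder(nn.Module):
    """DCGAN-style convolutional encoder: image -> spatial grid of features."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, feat_dim, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
    def forward(self, x):                      # x: (B, 3, H, W)
        return self.net(x)                     # (B, feat_dim, H/4, W/4)

class SentenceEncoder(nn.Module):
    """LSTM over the small fixed instruction vocabulary (17 words in the paper)."""
    def __init__(self, vocab_size=17, emb_dim=32, hid_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)
    def forward(self, tokens):                 # tokens: (B, T) word indices
        _, (h, _) = self.lstm(self.embed(tokens))
        return h[-1]                           # (B, hid_dim)

class RelationNetwork(nn.Module):
    """RN over all pairs of encoder features, conditioned on the sentence [6]."""
    def __init__(self, feat_dim=64, sent_dim=64, out_dim=256):
        super().__init__()
        self.g = nn.Sequential(
            nn.Linear(2 * feat_dim + sent_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim), nn.ReLU(),
        )
    def forward(self, feats, sent):            # feats: (B, C, h, w), sent: (B, S)
        B, C, h, w = feats.shape
        objs = feats.flatten(2).transpose(1, 2)            # (B, N, C), N = h*w
        N = objs.size(1)
        # Build every (object_i, object_j, sentence) triple; pairs grow as N^2.
        oi = objs.unsqueeze(2).expand(B, N, N, C)
        oj = objs.unsqueeze(1).expand(B, N, N, C)
        s = sent.unsqueeze(1).unsqueeze(1).expand(B, N, N, sent.size(-1))
        pairs = torch.cat([oi, oj, s], dim=-1)
        return self.g(pairs).sum(dim=(1, 2))               # (B, out_dim) = phi(X, s)

class ImageDecoder(nn.Module):
    """DCGAN-style transposed-convolution decoder: phi(X, s) -> output image."""
    def __init__(self, in_dim=256, img_size=64):
        super().__init__()
        self.img_size = img_size
        self.fc = nn.Linear(in_dim, 128 * (img_size // 4) ** 2)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, phi):
        h = self.fc(phi).view(-1, 128, self.img_size // 4, self.img_size // 4)
        return self.net(h)                     # (B, 3, img_size, img_size)

def generate(encoder, sentence_encoder, rn, decoder, image, tokens):
    """G(X, s): predict how the scene looks after the instructed manipulation."""
    return decoder(rn(encoder(image), sentence_encoder(tokens)))
```

In this reading, the RN treats every cell of the encoder's feature map as an "object" and conditions each object pair on the sentence embedding, following [6], before the merged representation is decoded back into an image.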

Using a language and relational model improves over the baseline. The purpose of the discriminator is to classify whether the output image is real with the correct action, as opposed to either fake with the correct action or real but with the wrong action. Using a discriminator only slightly improves the results, but it makes the generated images look more realistic with less noise.
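This three-way discriminator role resembles the matching-aware discriminator of [5]. As a hedged sketch of how such an objective could look (assuming a discriminator D(image, sentence) that returns a real/fake logit; the reconstruction term and its weight are assumptions, not values reported in the paper):

```python
# Hedged sketch of the GAN objective implied above, in the matching-aware style
# of Reed et al. [5]. D(image, sentence) is assumed to return a single logit.
import torch
import torch.nn.functional as F

def discriminator_loss(D, s_correct, s_wrong, x_target, x_generated):
    ones = torch.ones(x_target.size(0), 1)
    zeros = torch.zeros(x_target.size(0), 1)
    # Real target image with the correct action sentence -> "real".
    real_right = F.binary_cross_entropy_with_logits(D(x_target, s_correct), ones)
    # Generated image with the correct action sentence -> "fake".
    fake_right = F.binary_cross_entropy_with_logits(D(x_generated.detach(), s_correct), zeros)
    # Real target image with a wrong action sentence -> "fake".
    real_wrong = F.binary_cross_entropy_with_logits(D(x_target, s_wrong), zeros)
    return real_right + 0.5 * (fake_right + real_wrong)

def generator_loss(D, s_correct, x_generated, x_target, rec_weight=10.0):
    ones = torch.ones(x_generated.size(0), 1)
    adversarial = F.binary_cross_entropy_with_logits(D(x_generated, s_correct), ones)
    reconstruction = F.l1_loss(x_generated, x_target)   # pull output toward target
    return adversarial + rec_weight * reconstruction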

Model                          Remove   Replace  Add      Move     Overall
DCGAN (encoder only)           0.0407   0.0482   0.0441   0.0519   0.0457
DCGAN+LSTM+RN (encoder only)   0.0144   0.0222   0.0281   0.0264   0.0229
DCGAN+LSTM+RN (proposed)       0.0134   0.0208   0.0272   0.0249   0.0221

[Fig. 2.B shows input images, input sentences, and generated images for one example instruction per action, such as "Remove big blue cube", "Add big red sphere behind big blue cube", and "Move big blue pyramid left of small green pyramid".]

Fig. 2. A. RMSE between generated images and target images. B. Results on simulated images with four actions.
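For completeness, the values above are pixel-wise root-mean-square errors between generated and target images. A minimal sketch of the metric, assuming image batches stored as NumPy float arrays scaled to [0, 1]:

```python
import numpy as np

def rmse(generated, target):
    """Pixel-wise RMSE between two equally shaped image batches."""
    return float(np.sqrt(np.mean((np.asarray(generated) - np.asarray(target)) ** 2)))

# Per-action columns of the table would then be, e.g.:
# scores = {a: rmse(gen[a], tgt[a]) for a in ("remove", "replace", "add", "move")}
```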

The pre-trained model was then tested on a sequence of five pre-processed real-world images, see Figure 3.A. The sequence of images was first pre-processed using a color-based segmentation approach (see Figure 3.B) before being fed to the proposed model. The resulting generated images can be seen in Figure 3.D.
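The paper does not detail the color-based segmentation step. The sketch below illustrates one plausible version using OpenCV, where each of the three object colors is thresholded in HSV space and everything else is blanked out; the color ranges, the morphological clean-up, and the choice of OpenCV are assumptions, not the authors' pipeline.

```python
# Hedged sketch of a color-based segmentation pre-processing step.
import cv2
import numpy as np

# Approximate HSV ranges for the three object colors used in the dataset (assumed).
COLOR_RANGES = {
    "red":   (np.array([0, 80, 50]),   np.array([10, 255, 255])),
    "green": (np.array([40, 80, 50]),  np.array([80, 255, 255])),
    "blue":  (np.array([100, 80, 50]), np.array([130, 255, 255])),
}

def segment_objects(bgr_image):
    """Return the image with everything but the colored objects blanked out."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = np.zeros(hsv.shape[:2], dtype=np.uint8)
    for lo, hi in COLOR_RANGES.values():
        mask |= cv2.inRange(hsv, lo, hi)
    # Remove small speckles before masking the original image.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return cv2.bitwise_and(bgr_image, bgr_image, mask=mask)
```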

2 Conclusions and Future Work

This work combines an image encoder, language encoder, relational network, and image generator to ground action words and then visualize the effect an action would have on a simulated scene. The focus of this work has been on learning meaningful shared image and text representations for relational learning and object manipulation. Directions of future work for adapting to real-world settings include using pre-trained image and language encoders and training on real-world images. Other directions for future work include generating sequences of images that illustrate how the action is performed and then performing the actions on a real robot.


[Fig. 3 panels: A. Sequence of real-world images. B. Pre-processing. C. Proposed model: the input image X and the input sentence s (encoded by the Encoder and the LSTM, respectively) are merged by the RN into φ(X, s), which the Decoder turns into the output image G(X, s). D. Generated results: generated output images and target images for the sentence list S1-S4:
S1. "move the red small cube left of red big sphere"
S2. "remove the blue big pyramid"
S3. "replace the green small cube with a blue big pyramid"
S4. "add a green small cube on top of red big sphere"]

Fig. 3. Results on real-world images with pre-trained model.

References

1. Anderson, P., Wu, Q., Teney, D., Bruce, J., Johnson, M., Sünderhauf, N., Reid, I., Gould, S., van den Hengel, A.: Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 3674–3683 (2018)

2. Chen, J., Shen, Y., Gao, J., Liu, J., Liu, X.: Language-based image editing with recurrent attentive models. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 8721–8729 (2018)

3. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial nets. In: Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N.D., Weinberger, K.Q. (eds.) Advances in Neural Information Processing Systems 27, pp. 2672–2680. Curran Associates, Inc. (2014), http://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf

4. Radford, A., Metz, L., Chintala, S.: Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434 (2015)

5. Reed, S., Akata, Z., Yan, X., Logeswaran, L., Schiele, B., Lee, H.: Generative adversarial text to image synthesis. In: Proceedings of the 33rd International Conference on Machine Learning - Volume 48. pp. 1060–1069. ICML'16, JMLR.org (2016), http://dl.acm.org/citation.cfm?id=3045390.3045503

6. Santoro, A., Raposo, D., Barrett, D.G.T., Malinowski, M., Pascanu, R., Battaglia, P., Lillicrap, T.P.: A simple neural network module for relational reasoning. CoRR abs/1706.01427 (2017), http://arxiv.org/abs/1706.01427

7. Yamada, T., Matsunaga, H., Ogata, T.: Paired recurrent autoencoders for bidirectional translation between robot actions and linguistic descriptions. IEEE Robotics and Automation Letters 3(4), 3441–3448 (2018)

8. You, Q., Jin, H., Wang, Z., Fang, C., Luo, J.: Image captioning with semantic attention. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (June 2016)
