An LSTM-Based Neural Network Architecture for Model Transformations

Bibliographic Details
Published in: 2019 ACM/IEEE 22nd International Conference on Model Driven Engineering Languages and Systems (MODELS), pp. 294-299
Main Authors: Burgueño, Loli; Cabot, Jordi; Gérard, Sébastien
Format: Conference Proceeding
Language: English
Published: IEEE, 01-09-2019
Description
Summary: Model transformations are a key element in any model-driven engineering approach, but writing them is a time-consuming and error-prone activity that requires specific knowledge of the transformation language semantics. We propose to take advantage of advances in Artificial Intelligence and, in particular, Long Short-Term Memory (LSTM) neural networks, to automatically infer model transformations from sets of input-output model pairs. Once the transformation mappings have been learned, the LSTM system is able to autonomously transform new input models into their corresponding output models without the need to write any transformation-specific code. We evaluate the correctness and performance of our approach and discuss its advantages and limitations.
DOI:10.1109/MODELS.2019.00013
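The summary above describes learning transformations from input-output model pairs with an LSTM, which requires first flattening each model into a token sequence the network can consume. The following is a minimal illustrative sketch of such a serialization step; the toy model structure, function name, and token format are assumptions for illustration, not the encoding used in the paper.

```python
# Hypothetical sketch: flatten a toy model (a nested dict of typed,
# named elements) into a token sequence, so that input/output model
# pairs can be fed to a sequence-to-sequence LSTM. The encoding shown
# here is an assumption, not the paper's actual representation.

def serialize_model(element, tokens=None):
    """Depth-first flattening of a model element into a token list."""
    if tokens is None:
        tokens = []
    tokens.append(element["type"])            # element metatype, e.g. "Class"
    tokens.append(element.get("name", "_"))   # element name, "_" if unnamed
    for child in element.get("children", []):
        tokens.append("(")                    # mark start of a containment
        serialize_model(child, tokens)
        tokens.append(")")                    # mark end of the containment
    return tokens

# A toy input model: a Class containing two Attributes.
model = {
    "type": "Class",
    "name": "Person",
    "children": [
        {"type": "Attribute", "name": "name"},
        {"type": "Attribute", "name": "age"},
    ],
}

tokens = serialize_model(model)
print(tokens)
```

Given such sequences, each input model's token list and its corresponding output model's token list would form one training pair for an encoder-decoder LSTM; at inference time the trained network would emit the output token sequence, which is then parsed back into a model.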