Model elements identification using neural networks: a comprehensive study
Published in: Requirements Engineering, Vol. 26, No. 1, pp. 67-96
Main Authors:
Format: Journal Article
Language: English
Published: London: Springer London, 01-03-2021 (Springer Nature B.V.)
Summary: Modeling of natural language requirements, especially for a large system, can take a significant amount of effort and time. Many automated model-driven approaches partially address this problem. However, the application of state-of-the-art neural network architectures to automated model element identification tasks has not been studied. In this paper, we perform an empirical study on automatic identification of model elements for component state transition models from use case documents. We analyzed four different neural network architectures: a feed-forward neural network, a convolutional neural network, a recurrent neural network (RNN) with long short-term memory (LSTM), and an RNN with gated recurrent units (GRU), and the trade-offs among them using six use case documents. We also analyzed the effect of factors such as the type of splitting, type of prediction, type of design, and type of annotation on the performance of the neural networks. The results on the test and unseen data showed that the RNN with GRU is the most effective architecture; however, the factors that lead to effective predictions depend on the type of model element.
ISSN: 0947-3602, 1432-010X
DOI: 10.1007/s00766-020-00332-2
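The abstract above names the architecture the study found most effective (an RNN with GRU) but gives no implementation detail. As a rough illustration only, the following is a minimal PyTorch sketch of a GRU-based classifier that assigns a model-element class to an integer-encoded sentence from a use case document. The class name, vocabulary size, embedding and hidden dimensions, and the four-class label set are all assumptions made for this example, not the authors' setup.

```python
import torch
import torch.nn as nn

class GRUSentenceClassifier(nn.Module):
    """Illustrative GRU classifier: maps a token-id sequence from a use case
    sentence to one of several (hypothetical) model-element classes."""

    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128, num_classes=4):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer-encoded sentence
        embedded = self.embedding(token_ids)            # (batch, seq_len, embed_dim)
        _, last_hidden = self.gru(embedded)             # last_hidden: (1, batch, hidden_dim)
        return self.classifier(last_hidden.squeeze(0))  # (batch, num_classes) logits

# Example: classify one padded 12-token sentence from an assumed 5,000-word vocabulary.
model = GRUSentenceClassifier(vocab_size=5000)
logits = model(torch.randint(1, 5000, (1, 12)))
predicted_class = logits.argmax(dim=-1)
```

Using the final hidden state as the sentence representation is just one common design choice; the paper itself evaluates how factors such as the type of prediction and type of annotation affect performance, so its actual feature and label design may differ from this sketch.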