Knowledge graphs provide a useful semantic and informational layer for a variety of applications by representing facts as a network of relationships between entities. Given the very nature of knowledge, such a graph never contains all the facts of interest, which can degrade the performance of the applications built on it. This limitation has motivated sustained work on knowledge graph completion, that is, on inferring the truth value of relationships not observed in the graph. In particular, methods based on representation learning have become the main line of research and development. The vast majority of these methods, however, assume that the set of entities is static. Consequently, as the graph evolves, updated vector representations must be recomputed for the entities, which, besides being computationally expensive, casts doubt on the practical applicability of such techniques. In view of this, this thesis investigates the problem of out-of-sample completion in knowledge graphs, which drops the restriction that the set of entities at inference time be the same as the one observed during model training. In greater detail, a methodology based on representation learning and artificial neural networks is developed for this problem and evaluated empirically. In it, inference is carried out from vector representations produced by an encoding network from the current neighborhood of the fact to be inferred. The definition of this neighborhood, called the query context, is studied, as are strategies that allow the proposed methodology to scale. Experimental results indicate that the proposed methodology is competitive with the state of the art.