Maya Kabkab

Large-scale supervised classification algorithms, especially those based on deep convolutional neural networks (DCNNs), require vast amounts of training data to achieve state-of-the-art performance. Decreasing this data requirement would significantly speed up the training process and possibly improve generalization. Motivated by this objective, we consider…
We introduce a new random graph model. In our model, n, n ≥ 2, vertices choose a subset of potential edges by considering the (estimated) benefits or utilities of the edges. More precisely, each vertex selects k, k ≥ 1, incident edges it wishes to set up, and an undirected edge between two vertices is present in the graph if and only if both of the end vertices…
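
Below is a minimal sketch of the mutual-selection rule described in this abstract, assuming edge utilities are symmetric and drawn i.i.d. uniform on [0, 1]; the abstract only says vertices use "(estimated) benefits or utilities", so the utility distribution and the name mutual_selection_graph are illustrative, not the authors' construction:

import random
from itertools import combinations

def mutual_selection_graph(n, k, seed=None):
    """Each vertex picks its k highest-utility incident edges; an
    undirected edge {u, v} is present iff both u and v selected it."""
    rng = random.Random(seed)
    # Hypothetical symmetric utility for each potential edge
    # (an assumption; per-endpoint asymmetric utilities are also possible).
    utility = {frozenset(e): rng.random() for e in combinations(range(n), 2)}
    # Each vertex selects the k incident edges with the largest utility.
    chosen = {
        v: set(sorted(
            (frozenset((v, u)) for u in range(n) if u != v),
            key=lambda e: utility[e],
            reverse=True,
        )[:k])
        for v in range(n)
    }
    # Keep an edge only if both end vertices chose it.
    return sorted(tuple(sorted(e)) for e in utility
                  if all(e in chosen[v] for v in e))

print(mutual_selection_graph(n=6, k=2, seed=0))  # prints the mutually selected edges

Note the mutual-consent step: a vertex selecting an edge is necessary but not sufficient for the edge to appear, so every vertex has degree at most k in the resulting graph.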