A detailed understanding of the many facets of the Internet's topological structure is critical for evaluating the performance of networking protocols, for assessing the effectiveness of proposed techniques to protect the network from intrusions and attacks, and for developing improved designs for resource provisioning. Previous studies of topology …
There is a large, popular, and growing literature on “scale-free networks,” with the Internet and metabolic networks perhaps the canonical examples. While this literature has in many ways reinvigorated graph theory, there is unfortunately no consistent, precise definition of scale-free graphs and few rigorous proofs of many of their claimed …
The search for unifying properties of complex networks is popular, challenging, and important. For modeling approaches that focus on robustness and fragility as unifying concepts, the Internet is an especially attractive case study, mainly because its applications are ubiquitous and widely available exposition exists at every level of detail. …
TCP-AQM can be interpreted as a distributed primal-dual algorithm that maximizes aggregate utility over source rates. We show that an equilibrium of TCP/IP, if it exists, maximizes aggregate utility over both source rates and routes, provided congestion prices are used as link costs. An equilibrium exists if and only if this utility maximization problem and its …
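The primal-dual interpretation can be illustrated with a toy numerical sketch. This is not the paper's algorithm: the two-link topology, log utilities, step size, and all variable names are illustrative assumptions. Sources choose rates maximizing utility minus the congestion price summed along their fixed path; links (playing the AQM role) update prices by a projected gradient step toward capacity.

```python
# Hedged sketch of a distributed primal-dual rate/price iteration.
# Topology, utilities, and step size are assumptions for illustration only.
capacity = {"l1": 1.0, "l2": 2.0}            # link capacities
routes = {"s1": ["l1"], "s2": ["l1", "l2"]}  # fixed route per source
prices = {l: 0.1 for l in capacity}          # congestion prices (dual variables)
step = 0.05                                  # dual gradient step size

for _ in range(2000):
    # Source update: with U_s(x) = log x, the rate maximizing
    # U_s(x) - x * (path price) is x = 1 / (sum of link prices on the path).
    rates = {s: 1.0 / sum(prices[l] for l in routes[s]) for s in routes}
    # Link (AQM) update: gradient step on price, projected to stay >= 0.
    for l in capacity:
        load = sum(rates[s] for s in routes if l in routes[s])
        prices[l] = max(0.0, prices[l] + step * (load - capacity[l]))

print({s: round(x, 3) for s, x in rates.items()})
```

With log utilities this is the classic network-utility-maximization setup: link l1 is the shared bottleneck, its price rises until total load matches capacity, and both sources should settle near rate 0.5.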
Building on a recent effort that combines a first-principles approach to modeling router-level connectivity with a more pragmatic use of statistics and graph theory, we show in this paper that an improved understanding of the Internet's physical infrastructure is possible by viewing its physical connectivity as an annotated graph that delivers raw …
Although the “scale-free” literature is large and growing, it gives neither a precise definition of scale-free graphs nor rigorous proofs of many of their claimed properties. In fact, it is easily shown that the existing theory has many inherent contradictions and verifiably false claims. In this paper, we propose a new, mathematically precise, and …
The Internet is teeming with high-variability phenomena, from measured IP flow sizes to aspects of inferred router-level connectivity, but considerable debate remains about how best to model and deal with this high variability. While one popular approach favors modeling highly variable event sizes with conventional, finite-variance …
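A rough illustration of what "high variability" means in practice (not taken from the paper; the tail index and sample size are arbitrary assumptions): under a heavy-tailed, infinite-variance size distribution a single event can account for a noticeable fraction of the aggregate, whereas under a light-tailed distribution the largest event is always negligible.

```python
import random

random.seed(1)

def pareto(alpha: float) -> float:
    # Inverse-CDF sampling for a Pareto tail: P(X > x) = x**(-alpha), x >= 1.
    u = 1.0 - random.random()  # u in (0, 1], avoids u == 0
    return u ** (-1.0 / alpha)

n = 100_000
heavy = [pareto(1.2) for _ in range(n)]              # alpha < 2: infinite variance
light = [random.expovariate(1.0) for _ in range(n)]  # light (exponential) tail

# Fraction of the total mass carried by the single largest observation.
frac_heavy = max(heavy) / sum(heavy)
frac_light = max(light) / sum(light)
print(frac_heavy, frac_light)
```

For the heavy-tailed sample the largest event typically carries orders of magnitude more of the total than in the exponential sample, which is the qualitative signature that motivates debating finite- versus infinite-variance models.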
Anaerobes can obtain all of the cell's ATP through glycolysis and dispose of the resulting reducing power through fermentation. Maximal growth of these cells requires a delicate redox balance, and conditions that alter redox fluxes can induce a range of metabolic changes. Fundamental knowledge about how cells sense redox status and couple redox signals …
It is lamentable that Leslie Lamport's famous quote [9], “A distributed system is one in which the failure of a computer you didn't even know existed can render your own computer unusable,” describes a scenario familiar to almost every computer user. As IT systems become increasingly distributed, it is not only the clients and servers themselves that can render …