Surasak Sanguanpong

With the speed and bandwidth offered by next-generation Internet technology, there is a need for large and scalable Internet servers that can provide adequate computing power and storage for new-generation Internet applications. This traditionally requires a huge investment in a very large and expensive commercial server system. Recently, the emergence of …
A common problem of large-scale search engines and web spiders is how to handle the huge number of URLs they encounter. Traditional search engines and web spiders store URLs on disk without any compression, resulting in slow performance and higher space requirements. This paper describes a simple URL compression algorithm allowing efficient compression …
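The abstract is truncated here, so the paper's exact algorithm is not shown; the sketch below only illustrates a generic front-coding (shared-prefix) approach often used for URL lists, where sorted URLs are stored as a common-prefix length plus the remaining suffix. All names and sample URLs are illustrative assumptions.

```python
# Generic front-coding sketch for URL compression (illustration only,
# not necessarily the algorithm proposed in the paper above).

def compress(urls):
    """Encode each sorted URL as (shared-prefix length, remaining suffix)."""
    urls = sorted(urls)
    encoded, prev = [], ""
    for url in urls:
        # Count how many leading characters this URL shares with the previous one.
        n = 0
        while n < min(len(url), len(prev)) and url[n] == prev[n]:
            n += 1
        encoded.append((n, url[n:]))
        prev = url
    return encoded

def decompress(encoded):
    """Rebuild the sorted URL list from (prefix length, suffix) pairs."""
    urls, prev = [], ""
    for n, suffix in encoded:
        url = prev[:n] + suffix
        urls.append(url)
        prev = url
    return urls

if __name__ == "__main__":
    sample = [
        "http://www.example.ac.th/index.html",
        "http://www.example.ac.th/research/",
        "http://www.example.ac.th/research/spider.html",
    ]
    packed = compress(sample)
    assert decompress(packed) == sorted(sample)
    print(packed)
```

Because neighbouring URLs from the same site share long prefixes, storing only the differing suffixes typically shrinks the list substantially while still allowing sequential reconstruction.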
In a very high-speed network environment such as a gigabit Ethernet network, firewalls that have to inspect and filter all flowing packets are reaching their limits. A firewall running on a single machine is a potential bottleneck and cannot scale beyond certain thresholds, even with dedicated built-in hardware. Hence, a parallel system appears as an …
Search engines primarily rely on web spiders to collect large amounts of data for indexing and analysis. Data collection can be performed by several web-spider agents running in a parallel or distributed manner over a cluster of workstations. This parallelization is often necessary to cope with a large number of pages in a reasonable amount of time. …
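The snippet does not state how the authors divide crawl work among agents; a common approach, sketched below purely for illustration, is to assign each discovered URL to exactly one agent by hashing its hostname, so every site is fetched by a single agent and no page is downloaded twice. The agent count and URLs are assumed values.

```python
# Illustrative hash-based partitioning of a crawl frontier across spider agents
# (a generic scheme, not necessarily the one used in the paper above).

import hashlib
from urllib.parse import urlsplit

NUM_AGENTS = 4  # assumed cluster size for illustration

def agent_for(url: str, num_agents: int = NUM_AGENTS) -> int:
    """Map a URL to an agent id in [0, num_agents) based on its hostname."""
    host = urlsplit(url).hostname or ""
    digest = hashlib.md5(host.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_agents

# Distribute a frontier of discovered URLs into per-agent queues.
frontier = [
    "http://www.example.ac.th/",
    "http://www.example.ac.th/research/",
    "http://mirror.example.ac.th/",
]
queues = {i: [] for i in range(NUM_AGENTS)}
for url in frontier:
    queues[agent_for(url)].append(url)
print(queues)
```

Hashing on the hostname (rather than the full URL) keeps all pages of one site on one agent, which also makes it easier to respect per-site politeness limits.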
Conventional high-availability stateful parallel firewalls suffer from low scalability due to two overlapping requirements: workload distribution and redundancy. To achieve high throughput, load distribution with complex algorithms is conventionally employed, consuming many resources and making the system susceptible to state-related attacks such as …
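For context on the workload-distribution requirement mentioned above, the sketch below shows one simple, generic way packets can be spread over parallel stateful firewall nodes: hashing the connection endpoints so both directions of a flow always reach the same node, which then holds that flow's state. This is only an illustration of the baseline idea, not the scheme proposed in the paper, and all identifiers are assumptions.

```python
# Generic flow-hash distribution for a parallel stateful firewall (illustration).

import hashlib
from typing import Tuple

Flow = Tuple[str, int, str, int, str]  # (src_ip, src_port, dst_ip, dst_port, proto)

def firewall_node(flow: Flow, num_nodes: int) -> int:
    """Pick a firewall node for a flow; symmetric so both directions match."""
    src_ip, src_port, dst_ip, dst_port, proto = flow
    # Sort the endpoints so (A -> B) and (B -> A) hash to the same node.
    a, b = sorted([(src_ip, src_port), (dst_ip, dst_port)])
    key = f"{a[0]}:{a[1]}-{b[0]}:{b[1]}-{proto}".encode("utf-8")
    return int.from_bytes(hashlib.md5(key).digest()[:4], "big") % num_nodes

# Both directions of the same TCP connection land on the same node.
fwd: Flow = ("10.0.0.1", 40000, "192.168.1.5", 80, "tcp")
rev: Flow = ("192.168.1.5", 80, "10.0.0.1", 40000, "tcp")
assert firewall_node(fwd, 4) == firewall_node(rev, 4)
print(firewall_node(fwd, 4))
```

Keeping each flow's state on a single node avoids state synchronization during normal operation, but, as the abstract notes, redundancy then requires replicating or redistributing that state, which is where the scalability tension arises.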
Managing high workloads and concurrent accesses is a challenging task for a captive portal. The large number of clients generally creates a high workload on the system. Furthermore, some worm- or Trojan-infected clients create far more traffic by spreading themselves through the network via the HTTP protocol. Such stateful traffic typically leads to network attacks, …