Speech recognition and synthesis systems require the speech signal to be segmented into basic units such as words, phonemes, or syllables. These basic units are then searched during any segmentation or recognition process. One method of segmentation is to hand-label the speech based on a linguistic interpretation of what was spoken; this is the approach taken in most acoustic-phonetic recognition systems. However, hand labeling is time-consuming, error-prone, and its results are not reproducible. A method is therefore needed, as an alternative to hand labeling, that can automatically segment speech into basic units without visual inspection of the speech signal. Here, a procedure has been implemented that automatically segments speech signals into syllable-like units. The new automatic segmentation technique is then compared with existing techniques. Finally, the deviation between manual and automatic segmentation has been calculated for the onset and offset values of the syllable boundaries.
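The final evaluation step described above can be sketched as follows. This is a minimal, hypothetical illustration of comparing manual and automatic syllable boundaries: the function name, the pairing of one automatic boundary per manual syllable, and the use of mean absolute deviation in seconds are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical sketch: deviation between manually labeled and automatically
# detected syllable boundaries. Each syllable is a pair of (onset, offset)
# times in seconds; units and pairing scheme are assumed.

def boundary_deviation(manual, automatic):
    """Mean absolute deviation of onsets and offsets between paired boundaries."""
    if len(manual) != len(automatic):
        raise ValueError("expected one automatic boundary per manual syllable")
    onset_devs = [abs(m[0] - a[0]) for m, a in zip(manual, automatic)]
    offset_devs = [abs(m[1] - a[1]) for m, a in zip(manual, automatic)]
    n = len(manual)
    return sum(onset_devs) / n, sum(offset_devs) / n

# Example with three syllables (illustrative numbers, not data from the paper)
manual = [(0.10, 0.32), (0.35, 0.58), (0.61, 0.90)]
automatic = [(0.12, 0.30), (0.36, 0.60), (0.59, 0.91)]
onset_dev, offset_dev = boundary_deviation(manual, automatic)
print(round(onset_dev, 3), round(offset_dev, 3))
```

A per-boundary deviation list, rather than a single mean, would also allow reporting how many boundaries fall within a tolerance window (e.g. 20 ms), a common way of summarizing segmentation accuracy.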