Syed Asadullah Hussaini

Work place: ISL Engineering College, Department of Computer Science and Engineering, Hyderabad 500005, India

E-mail: drsyedasadullahhusaini@gmail.com

Website: https://orcid.org/0009-0001-8502-0697

Research Interests:

Biography

Syed Asadullah Hussaini, B.Tech, M.Tech, Ph.D, PDF (FMERU, FMERC), is an accomplished academician with 15 years of experience in the corporate and educational sectors. He completed his B.Tech and M.Tech in Computer Science and Engineering at JNTU Hyderabad and began his career as a Software Test Engineer at VMC Arctern Pvt. Ltd. Transitioning to academia, he held roles such as Vice Principal and Director of Academics at Hi Point College of Engineering & Technology, and later earned a Ph.D. from Shri JJT University, Rajasthan. He currently serves as an Associate Professor at ISL Engineering College, Hyderabad. Hussaini has guided six Ph.D. scholars, with several more currently pursuing their Ph.D. under his supervision, has published numerous research papers, and has filed patents, including “STERBAN: Ethereum Layer Two Scaling Solution” and a UK patent on environmental nanotoxicology.

Author Articles
Data Deduplication-based Efficient Cloud Optimisation Technique: Optimizing Cloud Storage through Data Deduplication

By Ranga Kavitha, Mahaboob Sharief Shaik, Narala Swarnalatha, M. Pujitha, Syed Asadullah Hussaini, Samiullah Khan, Shamsher Ali

DOI: https://doi.org/10.5815/ijieeb.2025.02.07, Pub. Date: 8 Apr. 2025

Effective storage management is crucial to the speed and cost of cloud computing systems, given the exponential growth of data, and the significance of this issue rises as data volumes continue to grow at an alarming pace. Detecting and removing duplicate data can improve storage utilisation and system efficiency: using less storage capacity reduces data transmission costs and enhances the scalability of cloud infrastructure. Applying deduplication techniques at scale, however, presents several important obstacles, including security issues, deduplication latency, and maintaining data integrity. This paper introduces the Data Deduplication-based Efficient Cloud Optimisation Technique (DD-ECOT), a method intended to optimise storage processes and enhance performance in cloud-based systems. DD-ECOT combines advanced pattern recognition with chunking to increase storage efficiency at minimal cost, protects data during deduplication with secure hash-based indexing, and uses parallel processing and a scalable design to reduce latency, making it adaptable to large, ever-changing cloud environments without impacting system performance. Enterprise cloud storage systems, disaster recovery solutions, and large-scale data management environments are typical use cases for DD-ECOT. Simulation analysis shows that the proposed DD-ECOT framework outperforms conventional deduplication techniques in storage efficiency, data retrieval speed, and overall system performance: it boosts storage efficiency by 92.8% by reducing duplicate data, reduces latency by 97.2% through parallel processing and sophisticated deduplication, improves data integrity to 98.1% with secure hash-based indexing, and achieves optimised bandwidth usage of 95.7% for efficient data transfer. These results suggest that DD-ECOT can reduce operational costs and improve cloud service delivery compared with existing deduplication methods.

[...] Read more.
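The abstract above describes chunking, secure hash-based indexing, and parallel processing only at a high level; the paper's own implementation is not reproduced here. The following is a minimal illustrative sketch of those general ideas, not the DD-ECOT code: it assumes fixed-size chunks, a SHA-256 chunk index, and Python's ThreadPoolExecutor for parallel hashing, and the names DedupStore, put, and get are hypothetical.

```python
# Illustrative sketch (not the authors' code): hash-indexed chunk deduplication
# with parallel chunk hashing. Assumes fixed-size chunks and a SHA-256 index.
import hashlib
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 4096  # assumed fixed chunk size in bytes


def split_into_chunks(data: bytes, chunk_size: int = CHUNK_SIZE):
    """Split a byte stream into fixed-size chunks."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]


class DedupStore:
    """Toy deduplicated store: each unique chunk is kept once, keyed by its hash."""

    def __init__(self):
        self.chunks = {}   # chunk hash -> chunk bytes (stored only once)
        self.objects = {}  # object name -> ordered list of chunk hashes

    def put(self, name: str, data: bytes) -> int:
        chunks = split_into_chunks(data)
        # Hash chunks in parallel; hashing is the CPU-bound step of indexing.
        with ThreadPoolExecutor() as pool:
            digests = list(pool.map(lambda c: hashlib.sha256(c).hexdigest(), chunks))
        new_bytes = 0
        for digest, chunk in zip(digests, chunks):
            if digest not in self.chunks:  # store only previously unseen chunks
                self.chunks[digest] = chunk
                new_bytes += len(chunk)
        self.objects[name] = digests
        return new_bytes  # bytes actually written after deduplication

    def get(self, name: str) -> bytes:
        """Rebuild an object from its chunk references (integrity check point)."""
        return b"".join(self.chunks[d] for d in self.objects[name])


if __name__ == "__main__":
    store = DedupStore()
    payload = b"backup block " * 1000
    written_first = store.put("backup-monday", payload)
    written_second = store.put("backup-tuesday", payload)  # duplicate content
    assert store.get("backup-tuesday") == payload
    print(f"first write stored {written_first} bytes, duplicate stored {written_second}")
```

In this sketch the second write of identical content stores zero new bytes because every chunk hash already exists in the index, which is the basic mechanism by which deduplication reduces storage and transfer costs.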
Other Articles