Is there a better algorithm for converting big binary numbers into decimal? At the moment I am stuck using single- and double-precision floats, which limits how large an integer I can represent exactly: a single-precision float can only hold integers up to 2^24 = 16,777,216 without rounding (and a double up to 2^53), so big binary values lose digits before I ever get to convert them.