TerdRatchett
- 5th May 2009, 20:35
I am writing a custom graphing function for a GLCD display. I'm having some difficulty with the math to calculate which dot gets shown for a given input. My math leaves me with a decimal value, which is of little use without floating-point math.
For example, let's say we have a scale that goes from 0 to 4800 and the Y axis is 8 bits x 6 pages, or 48 pixels high. So each pixel is worth 100. If we receive data of 2400, then we want to turn on the 24th pixel. By dividing the input data by the total worth of a page (800, which is 8 pixels), I know which page my dot is going to be on. The trouble is that when I try to come up with a way to determine which pixel of that byte will be displayed, I get a decimal value. Any ideas?
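
In case it helps show what I mean, here is a rough C sketch of the integer-only math I think I'm after, using the 0-4800 scale and 48-pixel height from above. I suspect a modulo would give the bit within the page, but I'd like to check. All names are just placeholders.

    #include <stdio.h>

    int main(void)
    {
        unsigned int data  = 2400;          /* incoming reading, 0-4800 */
        unsigned int pixel = data / 100;    /* which of the 48 pixels: 24 */
        unsigned int page  = pixel / 8;     /* which 8-pixel page: 3 */
        unsigned int bit   = pixel % 8;     /* bit position within that page: 0 */

        printf("pixel %u -> page %u, bit %u\n", pixel, page, bit);
        return 0;
    }

Does that look like the right approach, or is there a better way to do this without floats?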
TIA,
TR