Author Topic: Display a 1024x768 pict using only 64k segments/arrays  (Read 198 times)


Offline gameblabla

Display a 1024x768 pict using only 64k segments/arrays
« on: March 13, 2018, 09:39:29 am »
Hello guys,
I was having some fun trying to display graphics in all kinds of different graphics modes for DOS.
So far, I've been able to try out CGA, EGA and (ofc) VGA, including the various Mode X resolutions.

Just recently, I managed to figure out how to support and write pixels for the IBM 8514. (I'll talk about it in another post.)
However, I encountered some issues. The IBM 8514 is supposed to work with an IBM AT, which comes with an 80286, and that processor is 16-bit.
Making matters worse, arrays/segments cannot be bigger than 64k, even if you have enough memory.
(And even if you are using protected mode, as I found out later. Not to mention, it's full of bugs.)

Also, IBM never released hardware documentation for it. It only released documentation for AI (the Adapter Interface), a software layer.
And that software layer is very unsuitable for pixel drawing and framebuffer access.
The adapter supports a resolution of 1024x768 with 256 colors, and the only thing I can do is draw one pixel at a time.
The picture I want to display is that resolution, and it's about 768 KB big.

You can see where this is going...
Here's the relevant C code for it.

Code:
typedef struct tagBITMAP              /* the structure for a bitmap */
{
  word width;
  word height;
  byte *data;
} BITMAP;

BITMAP *bmp;

void drawquad(unsigned long col, short x, short y, unsigned short w, unsigned short h)
{
  static HRECT_DATA quad;    /* hscol_data/hear_data are AI parameter blocks defined elsewhere */
  quad.coord.x_coord = x;
  quad.coord.y_coord = y;
  quad.width = w;
  quad.height = h;
  hscol_data.index = (long) col;
  HSCOL(&hscol_data);        /* set supplied colour */
  HBAR();                    /* begin area          */
  HRECT(&quad);              /* draw rectangle      */
  HEAR(&hear_data);          /* end area            */
}

void draw_pict(BITMAP *bmp, int x, int y)
{
  unsigned short i, j;
  for (j = 0; j < bmp->height; j++)
    for (i = 0; i < bmp->width; i++)
      drawquad(bmp->data[i+(j*bmp->width)], (short)(x+i), (short)(y+j), 1, 1);
}

void load_bmp(const char *file, BITMAP *b)
{
  FILE *fp;
  word num_colors;

  /* open the file */
  if ((fp = fopen(file,"rb")) == NULL)
  {
    printf("Error opening file %s.\n",file);
    return;
  }
  /* check to see if it is a valid bitmap file */
  if (fgetc(fp)!='B' || fgetc(fp)!='M')
  {
    printf("%s is not a bitmap file.\n",file);
    fclose(fp);
    return;
  }
  /* read in the width and height of the image, and the
     number of colors used; ignore the rest */
  fread(&b->width, sizeof(word), 1, fp);
  fread(&b->height,sizeof(word), 1, fp);
  fread(&num_colors,sizeof(word), 1, fp);
  /* assume we are working with an 8-bit file */
  if (num_colors==0) num_colors=256;
  /* try to allocate memory -- note the (word) cast: width*height
     is 786432 for 1024x768, which overflows a 16-bit word */
  if ((b->data = (byte *) malloc((word)(b->width*b->height))) == NULL)
  {
    printf("Error allocating memory for file %s.\n",file);
    fclose(fp);
    return;
  }
  /* Ignore the palette information for now.
     See palette.c for code to read the palette info. */
  /* read the bitmap */
  fread(b->data, 1, (word)(b->width*b->height), fp);
  fclose(fp);
}

So with the 286's limitation of 64k for arrays, I am stuck.
Actually, I thought of several solutions, none of which worked or are ideal:
- Separate the picture into several parts. This would work, but it's a huge inconvenience.
- Call the 80286 a brain-dead chip (thanks billy) and make it 32-bit only.
The problem is that AI comes with a small bit of assembly, and I was unable to make it work in protected mode.
Plus, it wouldn't work on a stock IBM AT.

I looked at the only game for the 8514, Mah Jong -8514-, and it is also using AI.
I played the game and noticed it draws graphics the way it would with vector graphics.
So yeah, not a good example.

So what did programmers do at the time? And if you don't know, what would you suggest?
And no, please, I don't intend to split each picture into several parts unless you give me a good reason why I should.
Also, AI only allows you to draw a single pixel at best (using the rectangle function).
They did fix that later with the XGA, but that function is not backward compatible with the 8514.
« Last Edit: March 13, 2018, 09:41:48 am by gameblabla »


Offline Jarren Long

If you don't want to split up your image, you might be able to get away with using some simple compression on the bitmap, like Run-Length Encoding (RLE). If your image has scanlines where the same color repeats over many pixels, RLE could shrink your image quite a bit, which would let you load the whole thing into memory. For example, if the first scanline of your bitmap is all one color (say black, 0x00), you could RLE the entire line and store it in 10 bytes instead of 1024: five (count, color) byte pairs, FF00 FF00 FF00 FF00 0400 (read as "255 pixels of color 0x00" four times, then 4 more, since a one-byte count tops out at 255). At that point, you would just need to update your code to parse the RLE pixels and reinflate the bitmap on the fly while you render it. Though you're just trading clock cycles for memory by doing that.

Extra credit: instead of encoding the image beforehand, write code that will RLE the bitmap as you read it in.

That would solve the image size issue at least, so long as RLE is appropriate for your image. If you're trying to display something like a big color gradient, you're S.O.L.; RLE would actually make the image larger.
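To make the scheme concrete, here's a minimal sketch of a one-byte-count RLE encoder for an 8-bit scanline (the function name is hypothetical, not from any particular library):

```c
#include <stddef.h>

typedef unsigned char byte;

/* Encode an 8-bit scanline as (count, colour) byte pairs.
   Runs are capped at 255 so the count fits in one byte.
   Returns the number of bytes written to out, which must be
   able to hold up to 2*len bytes in the worst case. */
size_t rle_encode(const byte *in, size_t len, byte *out)
{
    size_t i = 0, o = 0;
    while (i < len) {
        byte colour = in[i];
        size_t run = 1;
        while (i + run < len && in[i + run] == colour && run < 255)
            run++;
        out[o++] = (byte)run;    /* repeat count (1..255) */
        out[o++] = colour;       /* pixel value           */
        i += run;
    }
    return o;
}
```

With one-byte counts, an all-black 1024-pixel scanline comes out as five pairs (four runs of 255 plus one of 4), i.e. 10 bytes instead of 1024; a worst case of alternating colours doubles in size, which is the gradient caveat.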

For drawing more than one pixel at a time with the library you have, you'll need to dig into the API to see how it accesses the video buffer to write out to the graphics area of memory, and then reproduce that yourself. The hardware specs for your device will probably have a breakdown of the memory locations somewhere, and assembly will probably be required. If you're willing to dig in that deep, you can probably just read the bitmap directly into the graphics memory area and skip the arrays altogether.

Now the big question: why on earth would you want to play with a 286?!?

Offline Juju

As I mentioned earlier on Discord, the 286 is a 16-bit CPU and as such can't address more than 2^16 bytes (64 KiB) of memory at once, so you can't just dump the entire image to the graphics adapter in one go. However, you can change which segment of memory the CPU sees, so that 2^24 bytes (16 MiB) are accessible. (Older Intel CPUs had a 20-bit address bus; starting with the 80286 it was widened to 24 bits, but you have to enable the A20 line, which is masked off by default for compatibility with older software that expected the address space to wrap around after 1 MiB.) In DOS, the first 640 KiB is mapped directly to RAM, while the area above it is memory-mapped I/O and ROM, and you need memory manager software to map the rest of the RAM. (You can also access more than 16 MiB of RAM with some sort of bank switching, but it's starting to get complicated here.)

A 1024x768x8 picture is 768 KiB, so theoretically you could fit it all into RAM, but you'd have to switch segments every 64 KiB, avoid the space that isn't mapped to RAM and use a memory manager. It's a big mess, really. And even then, unlike CGA, EGA, VGA and the like, which merely map the screen somewhere into the memory-mapped I/O space so you can just copy your bytes to it as if it were RAM, the 8514 has a GPU you have to send commands to (which is likely why you need the AI layer), and I don't think you can send all your bytes to it at once either.
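For reference, real-mode addressing works by shifting a 16-bit segment left 4 bits and adding a 16-bit offset, so one segment register only ever spans 64 KiB. A tiny sketch of the arithmetic (plain C, just to illustrate the mapping, not DOS-specific code):

```c
typedef unsigned long  u32;
typedef unsigned short u16;

/* Combine a real-mode segment:offset pair into a linear address:
   linear = segment * 16 + offset. Because the offset is only
   16 bits, one segment value reaches at most 64 KiB past its
   base, which is why a 768 KiB image needs the segment bumped
   as you walk through it. */
u32 to_linear(u16 seg, u16 off)
{
    return ((u32)seg << 4) + (u32)off;
}

/* Segment and offset for byte `index` of a big buffer based at
   segment `base_seg`: advance the segment one "paragraph"
   (16 bytes) at a time and keep the offset within 16 bits. */
u16 huge_seg(u16 base_seg, u32 index) { return (u16)(base_seg + (index >> 4)); }
u16 huge_off(u32 index)               { return (u16)(index & 0xF); }
```

Note that to_linear(0xFFFF, 0x0010) already lands at 0x100000, one byte past 1 MiB, which is exactly the region the A20 gate masks off.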

So I guess your best bet is probably streaming all your file reads directly to the graphics adapter, byte by byte, or chunk by chunk if you can. That is, read a pixel, draw it, and loop for every pixel. Since you have a rectangle function, you might even want to use some sort of RLE as @Jarren Long described: if you encounter, say, 16 pixels of the same color, draw a rectangle 16 pixels long. RLE decoding this way would be extremely simple.
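A sketch of that run-to-rectangle idea (hypothetical helper, not from AI): fread one width-byte scanline at a time into a small line buffer, collapse it into horizontal runs, then issue one rectangle per run instead of one per pixel.

```c
#include <stddef.h>

typedef unsigned char byte;

/* Collapse one already-read scanline into horizontal runs.
   run_col[i]/run_len[i] receive the colour and pixel count of
   run i; both arrays must hold up to `width` entries (worst
   case: every pixel differs). Returns the number of runs.
   The caller freads one scanline at a time, so only a ~1 KiB
   line buffer is ever held in memory -- no 768 KiB array. */
unsigned short scan_runs(const byte *line, unsigned short width,
                         byte *run_col, unsigned short *run_len)
{
    unsigned short x, n = 0, start = 0;
    if (width == 0)
        return 0;
    for (x = 1; x < width; x++) {
        if (line[x] != line[start]) {     /* run ended at x-1 */
            run_col[n] = line[start];
            run_len[n] = (unsigned short)(x - start);
            n++;
            start = x;
        }
    }
    run_col[n] = line[start];             /* final run */
    run_len[n] = (unsigned short)(width - start);
    return (unsigned short)(n + 1);
}
```

The drawing loop then becomes: for each run i, call drawquad(run_col[i], x, y, run_len[i], 1) and advance x by run_len[i]. Worst case it degrades to one call per pixel, same as now, but typical images need far fewer calls.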
« Last Edit: March 15, 2018, 09:27:00 am by Juju »

