
Display a 1024x768 picture using only 64k segments/arrays

Started by gameblabla, March 13, 2018, 09:39:29 AM



gameblabla

Hello guys,
I was having some fun trying to display graphics in all kinds of different graphics modes under DOS.
So far I've been able to try out CGA, EGA and (of course) VGA, including the various Mode-X resolutions.

Just recently, I managed to figure out how to program the IBM 8514 and write pixels to it. (I'll talk about it in another post.)
However, I ran into some issues. The IBM 8514 is meant to pair with an IBM AT, which comes with an 80286, and that processor is 16-bit.
Making matters worse, arrays/segments cannot be bigger than 64k, even if you have enough memory
(and even if you are using protected mode, as I found out later; not to mention that's full of bugs).

Also, IBM never released hardware documentation for it. It only released documentation for the Adapter Interface (AI), a software layer.
And that software layer is very unsuitable for pixel drawing and framebuffer access.
The adapter supports a resolution of 1024x768 with 256 colors, and the only thing I can do is draw one pixel at a time.
The picture I want to display is that exact resolution, so it's 768 KiB of pixel data (1024 × 768 × 1 byte = 786,432 bytes).

You can see where this is going...
Here's the relevant C code for it.


typedef struct tagBITMAP              /* the structure for a bitmap; word and
                                         byte are 16-bit and 8-bit typedefs
                                         defined elsewhere */
{
  word width;
  word height;
  byte *data;
} BITMAP;

BITMAP *bmp;

/* Draw a filled w x h rectangle at (x, y) through the 8514 AI layer.
   HSCOL/HBAR/HRECT/HEAR are AI entry points; hscol_data and hear_data
   are AI parameter blocks defined elsewhere. */
void drawquad(unsigned long col, short x, short y, unsigned short w, unsigned short h)
{
  static HRECT_DATA quad;
  quad.coord.x_coord = x;
  quad.coord.y_coord = y;
  quad.width = w;
  quad.height = h;

  hscol_data.index = (long) col;
  HSCOL(&hscol_data);   /* set supplied colour */
  HBAR();               /* begin area          */
  HRECT(&quad);         /* draw rectangle      */
  HEAR(&hear_data);     /* end area            */
}

/* Draw a whole bitmap one pixel at a time. Note the index arithmetic:
   for a 1024x768 image, i * bmp->width overflows 16 bits, which is
   exactly the 64k problem described below. */
void draw_pict(BITMAP *bmp, int x, int y)
{
    unsigned short i, j;
    for (i = 0; i < bmp->height; i++)       /* i = row (y)    */
    {
        for (j = 0; j < bmp->width; j++)    /* j = column (x) */
        {
            drawquad(bmp->data[j + (unsigned long)i * bmp->width],
                     (short)(x + j), (short)(y + i), 1, 1);
        }
    }
}

void load_bmp(const char *file,BITMAP *b)
{
  FILE *fp;
  long index;
  word num_colors;
  int x;

  /* open the file */
  if ((fp = fopen(file,"rb")) == NULL)
  {
    printf("Error opening file %s.\n",file);
    exit(1);
  }

  /* check to see if it is a valid bitmap file */
  if (fgetc(fp)!='B' || fgetc(fp)!='M')
  {
    fclose(fp);
    printf("%s is not a bitmap file.\n",file);
    exit(1);
  }

  /* read in the width and height of the image, and the
     number of colors used; ignore the rest
     (fskip() is a small fseek wrapper defined elsewhere) */
  fskip(fp,16);
  fread(&b->width, sizeof(word), 1, fp);
  fskip(fp,2);
  fread(&b->height,sizeof(word), 1, fp);
  fskip(fp,22);
  fread(&num_colors,sizeof(word), 1, fp);
  fskip(fp,6);

  /* assume we are working with an 8-bit file */
  if (num_colors==0) num_colors=256;


  /* try to allocate memory; the (word) cast truncates the product to
     16 bits, which is where the 64k limit bites (see the note after
     this listing) */
  if ((b->data = (byte *) malloc((word)(b->width*b->height))) == NULL)
  {
    fclose(fp);
    printf("Error allocating memory for file %s.\n",file);
    exit(1);
  }

  /* Ignore the palette information for now.
     See palette.c for code to read the palette info. */
  fskip(fp,num_colors*4);

  /* read the bitmap, bottom-up as stored in the file; the (word) cast
     on the index likewise wraps past 64k for large images */
  for(index=(b->height-1)*b->width;index>=0;index-=b->width)
    for(x=0;x<b->width;x++)
      b->data[(word)index+x]=(byte)fgetc(fp);

  fclose(fp);
}
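
To spell out where this falls over: with 16-bit word arithmetic, the allocation size wraps around. A quick illustration (assuming word is a 16-bit unsigned typedef, as in the code above):

#include <stdio.h>

typedef unsigned short word;   /* 16 bits, as in the code above */

int main(void)
{
    /* 1024 * 768 = 786432 = 12 * 65536, so truncated to 16 bits the
       product is exactly 0: malloc() gets asked for zero bytes, and
       the (word) indices into b->data wrap around every 64k */
    printf("%u\n", (word)(1024UL * 768UL));   /* prints 0 */
    return 0;
}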


So with the 286's 64k limit on arrays/segments, I am stuck.
I did think of several solutions, none of which worked or is ideal:
- Split the picture into several parts. This would work, but it's a huge inconvenience. (A sketch of this approach follows below.)
- Call the 80286 a brain-dead chip (thanks, Billy) and make the program 32-bit only.
The problem is that AI comes with a small bit of assembly, and I was unable to make it work in protected mode.
Plus, it wouldn't work on a stock IBM AT.
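
For reference, here is roughly what option 1 could look like. This is only a minimal, untested sketch: it reuses the drawquad() helper above, assumes the FILE* is already positioned at the start of the BMP pixel data (as after the header parsing in load_bmp()), assumes no scanline padding (true for a 1024-wide image), and assumes the height divides evenly into strips (768 / 32 = 24). The names draw_bmp_strips and STRIP_ROWS are illustrative, not from any real library.

#include <stdio.h>
#include <stdlib.h>

#define STRIP_ROWS 32   /* 1024 * 32 = 32 KiB per strip, well under 64k */

void drawquad(unsigned long col, short x, short y, unsigned short w, unsigned short h);

void draw_bmp_strips(FILE *fp, unsigned short width, unsigned short height)
{
    unsigned char *strip;
    unsigned short row, r, x;

    /* one strip is width * STRIP_ROWS bytes; the unsigned cast keeps
       the multiplication out of signed 16-bit territory */
    strip = (unsigned char *) malloc((unsigned) width * STRIP_ROWS);
    if (strip == NULL)
        return;

    /* BMP pixel data is stored bottom-up, so the first strip read is
       the bottom of the on-screen image: flip the y coordinate */
    for (row = 0; row < height; row += STRIP_ROWS)
    {
        fread(strip, 1, (unsigned) width * STRIP_ROWS, fp);
        for (r = 0; r < STRIP_ROWS; r++)
            for (x = 0; x < width; x++)
                drawquad(strip[x + r * width], (short) x,
                         (short)(height - 1 - (row + r)), 1, 1);
    }
    free(strip);
}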

I looked at the only game for the 8514, Mah Jong -8514-, and it also uses AI.
I played the game, and I noticed it draws its graphics the way you would with vector graphics.
So yeah, not a good example.

So what did programmers do at the time? And if you don't know, what would you suggest?
And please, I don't intend to split each picture into several parts unless you give me a good reason why I should.
Also, AI only lets you draw a single pixel at best (using the rectangle function).
They did fix that later with XGA, but that interface is not backward compatible with the 8514.

Jarren Long

If you don't want to split up your image, you might be able to get away with using some simple compression on the bitmap, like run-length encoding (RLE). If your image has scanlines where the same color repeats over many pixels, RLE could shrink it quite a bit, which would let you load the whole thing into memory.

Example: if the first scanline at the top of your bitmap is all the same color (we'll say black, 0x00), you could RLE that entire line and store it in 8 bytes instead of 1024: four count bytes and four color bytes, interleaved as FF00FF00FF00FF00 (read each pair as "0xFF pixels of color 0x00"; if the count byte stores the run length minus one, 0xFF covers 256 pixels, and four pairs cover the full 1024). At that point, you would just need to update your code to parse the RLE pairs and re-inflate the bitmap on the fly while you render it. Though you're just sacrificing clock cycles to spare memory by doing that.

Extra credit: instead of encoding the image beforehand, write code that RLE-encodes the bitmap as you read it in.

That would solve the image size issue at least, so long as RLE is appropriate for your image. If you're trying to display something like a big color gradient, you're S.O.L.: RLE would actually make the image larger. A sketch of the encoder follows below.
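
Something like this, say. A minimal sketch under slightly simpler assumptions than the example above: 8-bit pixels, one (count, value) pair per run, and the count capped at 255 so it fits in one byte. The name rle_encode is illustrative, not from any real library.

#include <stddef.h>

/* Run-length encode 8-bit pixel data: each run becomes a (count, value)
   pair. Worst case (no repeats at all) the output is twice the input. */
size_t rle_encode(const unsigned char *src, size_t len, unsigned char *dst)
{
    size_t in = 0, out = 0;
    while (in < len)
    {
        unsigned char value = src[in];
        unsigned char count = 1;
        /* extend the run while the next byte matches, up to 255 */
        while (count < 255 && in + count < len && src[in + count] == value)
            count++;
        dst[out++] = count;   /* run length   */
        dst[out++] = value;   /* pixel colour */
        in += count;
    }
    return out;               /* encoded size in bytes */
}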

For drawing more than one pixel at a time with the library you have, you'll need to dig into the API to see how it accesses the video buffer to write out to the graphics area of memory, and then reproduce that yourself. The hardware specs for your device will probably have a breakdown of the memory locations somewhere, and assembly will probably be required. If you're willing to dig in that deep, you can probably just read the bitmap directly into the graphics memory area and skip the arrays altogether.

Now the big question: why on earth would you want to play with a 286?!?

Yuki

As I mentioned earlier on Discord, the 286 is a 16-bit CPU and as such can't address more than 2^16 bytes (64 KiB) of memory at once, so you can't just dump the entire image to the graphics adapter in one go. However, you can change which segment of memory the CPU sees, so that 2^24 bytes (16 MiB) are reachable overall. (Older Intel CPUs had a 20-bit address bus; the 80286 widened it to 24 bits, but you have to enable address line 20 via the A20 gate, which is normally disabled for compatibility with older software that expected the address space to wrap around after 1 MiB.) In DOS, the first 640 KiB is directly mapped to RAM, while the region above it is memory-mapped I/O, and you need memory-manager software to map the rest of the RAM. (You can also access more than 16 MiB of RAM with some sort of bank switching, but it's starting to get complicated there.) (Read more about it: https://en.wikipedia.org/wiki/DOS_memory_management)
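
To make the segment arithmetic concrete, here is a small sketch assuming a Borland-style 16-bit compiler (the MK_FP macro comes from its dos.h; read_linear is just an illustrative name). A "linear" byte index into a buffer spanning several 64 KiB segments is split into a paragraph (segment) part and an offset part:

#include <dos.h>

/* Fetch one byte at a linear index from a buffer that may span several
   64 KiB segments. Each segment paragraph covers 16 bytes of linear
   address space, so the index splits as index/16 paragraphs plus
   index%16 bytes of offset. */
unsigned char read_linear(unsigned int base_seg, unsigned long index)
{
    unsigned int seg = base_seg + (unsigned int)(index >> 4);
    unsigned int off = (unsigned int)(index & 0xF);
    return *(unsigned char far *)MK_FP(seg, off);
}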

A 1024x768x8 picture is 768 KiB, so theoretically you could fit it all into RAM, but you'd have to switch segments every 64 KiB, avoid the address space that isn't mapped to RAM, and use a memory manager. It's a big mess, really. And even then, unlike CGA, EGA, VGA and the like, which merely map the screen somewhere into the memory-mapped I/O space so you can just copy your bytes onto it as if it were RAM, the 8514 has a graphics coprocessor you'd have to send commands to (which is likely why you need the AI layer), and I don't think you can send all your bytes to it at once either.

So I guess your best bet is probably streaming your file reads directly to the graphics adapter, byte by byte, or chunk by chunk if you can. That is: read a pixel, draw it, and loop for every pixel. Since what you have is a rectangle function, you might even want to use some sort of RLE as @Jarren Long described here; if you encounter, say, 16 pixels of the same color, draw a rectangle 16 pixels long. RLE decoding this way would be extremely simple.
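
Something along these lines (an untested sketch reusing gameblabla's drawquad() from above; it assumes the file position is already at the start of the bottom-up BMP pixel data and that scanlines carry no padding, which holds for a 1024-pixel-wide image):

#include <stdio.h>

void drawquad(unsigned long col, short x, short y, unsigned short w, unsigned short h);

/* Stream pixels straight from the file, collapsing runs of equal pixels
   within a scanline into single rectangle calls. Only a tiny constant
   amount of memory is used -- no 768 KiB buffer anywhere. */
void stream_draw(FILE *fp, unsigned short width, unsigned short height)
{
    unsigned short row, col, run;
    int cur, next;

    for (row = 0; row < height; row++)
    {
        col = 0;
        while (col < width)
        {
            cur = fgetc(fp);
            run = 1;
            /* extend the run while the next pixel matches, stopping
               at the end of the scanline */
            while (col + run < width && (next = fgetc(fp)) == cur)
                run++;
            if (col + run < width)
                ungetc(next, fp);   /* push back the first mismatch */
            /* BMP rows are stored bottom-up: flip the y coordinate */
            drawquad((unsigned long) cur, (short) col,
                     (short)(height - 1 - row), run, 1);
            col += run;
        }
    }
}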

gameblabla

Well, the problem is that not even fopen/fread can help here: you still read into a buffer, and that buffer can't be bigger than 64K either, so that makes things more complicated...
I ended up splitting the images into several parts, because everything else has its own drawbacks...

I thought about using RLE and decompressing on the fly, which can help, but my images are quite complex, so it doesn't save much space...

And the 8514 was never properly documented, at all (no documentation on its hardware registers whatsoever, even for XGA).
So yeah, kind of a dead end. I'll release my source code in full someday, once I get my small game up and running.
