The current proposal is to emulate the VIDC- and IOC-related registers and use the proposed GraphicsV enhancements to provide a driver that sits between the OS and the physical device. It would perform a blit every VSync from the logical screen memory (in DA2) to the current display device, accounting for the geometry and visible portion specified in the VIDC1 registers and the vertical scrolling position from IOC.
I'm proposing to use a palettised 256 colour mode on the Pi to allow quick up-conversion from lower bit-depth modes, avoiding the need for palette conversion. Subsequently, any VIDC palette changes will be mirrored to the physical device palette as they occur. This will suffice for most games, however...
At present I have no idea how to support games that change the palette on the fly, Fire & Ice being an example where the palette is changed halfway down the screen. This relies on the fact that the raster is in a known position at a specific time, which wouldn't be the case unless 1. IOC timers are emulated accurately and 2. palette changes are linked to vertical points in the virtual display and handled by the blitter, instead of being passed directly to the physical device. It would also need a 16/24-bit physical display and palette conversion, as I'm not certain it's possible to change the palette partway through a frame on Iyonix/OMAP/Pi.
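To illustrate option 2, here's a minimal C sketch of a blitter that replays mid-frame palette writes per scanline while expanding to a 32-bit framebuffer. All names are hypothetical, not ADFFS code, and the raster line for each change is assumed to come from the emulated IOC timers:

```c
#include <stdint.h>

/* A palette write captured mid-frame, tagged with the raster line it
   took effect on (assumed to be derived from the emulated IOC timers). */
typedef struct {
    int line;        /* scanline at which the change applies */
    uint8_t index;   /* palette entry number */
    uint32_t colour; /* 32-bit colour for the physical display */
} palette_change;

/* Blit one 8bpp frame to a 32bpp buffer, applying queued palette
   changes (sorted by line) at the scanlines where they occurred. */
void blit_with_raster_palette(const uint8_t *src, uint32_t *dst,
                              int width, int height,
                              uint32_t palette[256],
                              const palette_change *changes, int nchanges)
{
    int next = 0;
    for (int y = 0; y < height; y++) {
        /* replay any palette writes that landed on this raster line */
        while (next < nchanges && changes[next].line <= y) {
            palette[changes[next].index] = changes[next].colour;
            next++;
        }
        for (int x = 0; x < width; x++)
            dst[y * width + x] = palette[src[y * width + x]];
    }
}
```

The key point is that the palette lookup moves into the blit itself, so the physical device palette is never touched and a Fire & Ice style split-screen palette falls out naturally.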
My main concern is the CPU time required to perform the blit. I'm proposing this be done at the FPS rate of the original game, so Zarch for example might run at 12.5 FPS, Fire & Ice at 25 FPS and Pac-mania at 50 FPS. We could simply drop frames if necessary; ADFFS already tracks the timing of each frame swap to regulate game speed and could decide at that point whether a blit should be done or skipped.
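The per-game frame pacing could look something like the following sketch. This is illustrative only (ADFFS tracks frame-swap timing its own way); the point is that each game carries a frame period and the VSync handler drops any blit that arrives early:

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical per-game blit pacing state. */
typedef struct {
    uint32_t frame_period_us; /* e.g. 80000us for a 12.5 FPS game */
    uint32_t last_blit_us;    /* time of the last completed blit */
} blit_state;

/* Called on each VSync: blit only when a full game-frame period has
   elapsed, otherwise drop the frame. */
bool should_blit(blit_state *s, uint32_t now_us)
{
    if (now_us - s->last_blit_us >= s->frame_period_us) {
        s->last_blit_us = now_us;
        return true;
    }
    return false;
}
```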
This leads on to another issue: some games don't use frame swaps. I've yet to figure out a neat way to slow these games down on an actual Acorn, let alone what to do where blitting is required. I think we can probably presume they have a known FPS, and ADFFS will at some point regulate the game speed in a manner that's unrelated to video conversion.
VIDC1/20 emulation
Re: VIDC1/20 emulation
For future reference, the following games are 32bit compatible, don't touch VIDC registers and only require 4-bit MODE support to work correctly:
Jet Fighter
Pac-mania [Learning Curve version]
Terramex
Re: VIDC1/20 emulation
I'm going to attempt to code this by hijacking GraphicsV 2, 6, 7, 8, 9, 11 and 13 where R4 AND &FF000000 == 0
GraphicsV 2
Force physical graphics hardware to 8bit equivalent MODE via:
1. Where [R0, #4] < 3 set to 3
2. Pass call to original GraphicsV owner
3. Copy the VIDC type 3 block so we can handle future VIDC register changes
GraphicsV 6
Adjust actual graphics hardware frame address via:
1. Force R1 to the GPU frame store buffer (as provided previously by GPU GraphicsV 9)
2. Pass call to original GraphicsV owner
GraphicsV 7
Allow all modes via:
1. Set R0 = R4 = 0
2. Exit
GraphicsV 8
Allow the OS to own so DA2 is used as the frame buffer via:
1. Exit leaving unclaimed
GraphicsV 9
Pass to GPU or allow OS to own, depending on caller
1. If we issued the GraphicsV 9 call ourselves, pass it to the GPU. For all other callers, claim the call but leave R4 alone and exit.
GraphicsV 11
Prevent changes to palette entries above the bpp of the mode
1. Set R4 = 0
2. Exit
GraphicsV 13
Prevent hardware rendering by allowing the OS to own.
1. Claim call, but leave R4
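The claim/pass-through decisions above can be summarised as a dispatcher. The following is a hypothetical C model, not ADFFS code: GraphicsV is really an ARM vector with the reason code in the low bits of R4 and the driver number in the top byte, and the handling here is simplified from the list above:

```c
#include <stdint.h>

enum { PASS_ON, CLAIMED };

/* Minimal model of the GraphicsV entry registers. */
typedef struct { uint32_t r0, r1, r4; } regs;

int graphicsv_shim(regs *r)
{
    if (r->r4 & 0xFF000000) /* not driver 0: leave well alone */
        return PASS_ON;

    switch (r->r4 & 0xFFFF) {
    case 7:  /* vet mode: allow all modes */
        r->r0 = 0;
        r->r4 = 0; /* claimed */
        return CLAIMED;
    case 8:  /* features: leave unclaimed so DA2 is the frame buffer */
        return PASS_ON;
    case 11: /* palette guard: claim and do nothing */
        r->r4 = 0;
        return CLAIMED;
    case 13: /* render ops: claim but leave R4, forcing the OS to
                fall back to software rendering */
        return CLAIMED;
    default: /* 2, 6, 9: adjust registers then pass to the original
                GraphicsV owner, as per the steps above */
        return PASS_ON;
    }
}
```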
I'm not sure at this point if we need to trigger VSyncs; hopefully the original driver will continue to trigger them, and combined with a 100Hz MDF we could in theory run games at their intended frame rate.
When a VSync is triggered, we copy the current active screen from DA2 to the physical screen memory, shifting bits up so that each pixel ends up at 8 bits. I'll write bespoke code for 1, 2 and 4 bpp for speed.
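The 4 bpp case is the common one. A plain C version of the nibble expansion is below; it assumes VIDC's packing, where the leftmost pixel sits in the low nibble of each byte. The real routine would work a word at a time in ARM code with PLD pre-caching, but the shape is the same:

```c
#include <stdint.h>
#include <stddef.h>

/* Expand a row of 4bpp pixels to 8bpp.  Assumes VIDC packing:
   leftmost pixel in the low nibble of each byte. */
void expand_4bpp_row(const uint8_t *src, uint8_t *dst, size_t npixels)
{
    for (size_t i = 0; i < npixels; i += 2) {
        uint8_t b = src[i / 2];
        dst[i]     = b & 0x0F; /* low nibble  = left pixel  */
        dst[i + 1] = b >> 4;   /* high nibble = right pixel */
    }
}
```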
When palette entries are written to VIDC1/20, these are translated into GraphicsV 10 calls and passed to the original GraphicsV driver. All other VIDC register writes are simply stored for future reference to correct the screen geometry:
VIDC palette
Translate to PaletteV 2 calls
HDSR and HDER
Update VIDC type 3 display width entry and update GPU via GraphicsV 2
VDSR and VDER
Update VIDC type 3 display height entry and update GPU via GraphicsV 2
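For the palette translation, a VIDC1 entry packs 4 bits per gun (red in bits 0-3, green in 4-7, blue in 8-11). A sketch of the widening to the &BBGGRR00 word a GraphicsV 10 call would pass on, with each nibble replicated so &F scales to &FF (function name is illustrative):

```c
#include <stdint.h>

/* Convert a VIDC1 palette register value (4 bits per gun: R in bits
   0-3, G in 4-7, B in 8-11) to a &BBGGRR00 palette word, replicating
   each nibble into the top nibble so &F becomes &FF. */
uint32_t vidc1_to_graphicsv_palette(uint16_t vidc)
{
    uint32_t r = vidc & 0x00F;
    uint32_t g = (vidc & 0x0F0) >> 4;
    uint32_t b = (vidc & 0xF00) >> 8;
    r |= r << 4;
    g |= g << 4;
    b |= b << 4;
    return (b << 24) | (g << 16) | (r << 8);
}
```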
Re: VIDC1/20 emulation
I've coded the core parts of this and have Zarch working; there's a lot left to do, though, as there are various issues to resolve:
1. OS_ReadVduVariables 149 is returning the wrong framestore. I need to reliably switch it between the GPU and DA2 framestores
2. The frame currently being blitted to the GPU framestore is out of step, so it's currently jerky, as if the frames are the wrong way round. I suspect the DA2 framestore isn't changing as I'm passing the request through to the GPU.
3. Add 1, 2 and 4 bit support
4. Test GraphicsV 7
5. Add palette change support
6. GraphicsV isn't being released when OS_Release is called, need to investigate if it's an OS bug
David, if you want to test with your 32bit translation code:
1. Grab the files from the dev site, under "/Development/32bit/DA2 support"
2. Grab the Zarch JFD file if you haven't already
3. Load !ADFFS
4. Load the ADFFS500221 module
5. Mount the Zarch JFD
To test avoiding the self-modifying code (translator not required for this):
6. Run the F1040201 Obey file (NOTE: it will crash after the hiscore table, as there's non-32-bit-compliant code in it)
To test using the 32bit translation:
6. Alter the "GO 1FE00" line in F1040201-32 to initiate via the translator
7. Run F1040201-32
Re: VIDC1/20 emulation
These issues are resolved; the module is on the dev site, same filename as before.

JonAbbott wrote:
1. OS_ReadVduVariables 149 is returning the wrong framestore. I need to reliably switch it between the GPU and DA2 framestores
2. The frame currently being blitted to the GPU framestore is out of step, so it's currently jerky, as if the frames are the wrong way round. I suspect the DA2 framestore isn't changing as I'm passing the request through to the GPU.
4. Test GraphicsV 7
I've also added 4-bit MODE support and have been playing Jet Fighter successfully. The game may have an issue with misaligned reads, as the background scenery gets vertical blank lines as you move left/right a pixel at a time, but it works otherwise. I've put the F1021301 Obey file in the same dev area if you want to try it.
I'd be interested to see if this works on the Iyonix; unfortunately mine is dead, so I couldn't test.
Re: VIDC1/20 emulation
Here are some photos of the above in action. ADFFS is pointing the OS to DA2, which is also mapped to the RO3.1 video buffer locations, and then blitting the frames every other VSync to the GPU framebuffer. In the case of Jet Fighter and Terramex, it's also converting the frame from 4-bit to 8-bit.
Re: VIDC1/20 emulation
These are now added/fixed; the ADFFS500222 module is on the dev site.

JonAbbott wrote:
5. Add palette change support
6. GraphicsV isn't being released when OS_Release is called, need to investigate if it's an OS bug
I'm at a point where I can now start coding the VIDC1/VIDC20 register translation. Unfortunately, I've been unable to find a game that's 32-bit compatible and writes to VIDC registers, so I can only test with test writes. I'm half tempted to knock up an undefined abort handler to deal with MOVNV R0, R0 (it's used in all the Krisalis titles) to see if it gets any working that I can test.
I've not added 1 bpp or 2 bpp MODE support yet - I'm not even sure if any games use them, so I'll leave that out for the time being. If I do code them, I won't bother optimizing as they'll probably only be used for testing.
Re: VIDC1/20 emulation
1 bpp and 2 bpp MODE support added. The 1 bpp MODEs don't look correct on the Pi: there are blank scanlines between each vertical line, but they work okay otherwise.
MODE 7 doesn't work, I suspect I need to pass it on instead of claiming it.
Re: VIDC1/20 emulation
I added this and Pac-mania is now playable. One side effect of not having a full 26-bit CPU interpreter is that the ghost collision detection doesn't work, so you can play indefinitely - it gets boring real quick.

JonAbbott wrote:
I'm half tempted to knock up an undefined abort handler to deal with MOVNV R0, R0 (it's used in all the Krisalis titles) to see if it gets any working that I can test.
Re: VIDC1/20 emulation
ADFFS500223 module on the dev site.
It now blits the frames to the GPU frame buffer at the FPS of the game. When OS_Byte 113 is called, it marks the frame as dirty and blits it on the next VSync. If it doesn't see an OS_Byte 113 for 8ms, it falls back to blitting at 50 FPS.
I've also added pre-caching to the memcopy and 4-bit conversion code via PLD.
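The dirty-frame pacing above amounts to something like this sketch (names and the exact timing source are illustrative, times in microseconds):

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical dirty-frame pacing: an OS_Byte 113 frame swap marks
   the frame dirty; it is blitted on the next VSync.  If no swap is
   seen for 8ms, fall back to blitting every VSync (50 FPS on a 50Hz
   display). */
typedef struct {
    bool dirty;
    uint32_t last_swap_us;
} frame_state;

void on_frame_swap(frame_state *s, uint32_t now_us) /* OS_Byte 113 */
{
    s->dirty = true;
    s->last_swap_us = now_us;
}

bool blit_on_vsync(frame_state *s, uint32_t now_us)
{
    if (s->dirty) {
        s->dirty = false;
        return true;
    }
    /* no OS_Byte 113 for 8ms: assume the game isn't swapping
       frames and blit anyway */
    return now_us - s->last_swap_us >= 8000;
}
```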