mem_seg_t can be shrunk, as long as the resulting change doesn't bloat the code too much.
At present, mem_seg_t takes 11 bytes:
struct mem_seg_s {
	uint8_t seg_state;
	mem_seg_t *seg_next;
	mem_seg_t *seg_prev;
	char *seg_start;
	char *seg_end;		/* == seg_start + seg_size - 1 */
	unsigned int seg_size;
};
Only one of seg_end and seg_size is needed; the other can be computed.
Equally, we can calculate seg_size as the byte difference "seg->seg_next - &seg[1]".
Although I've not checked, from the code I suspect that seg_next is always seg_end + 1, so either one can simply be calculated from the other.
Given we must have a forward pointer, we could ditch both seg_size and seg_end, calculating them as needed. In fact, it may be that we don't need them at all.
seg_prev is useful for avoiding a walk from the list head when coalescing adjacent segments during free, or when splitting a free segment to satisfy a request at a specific address.
The one case where seg_end != (seg_next - 1) is the last segment. We could make seg_state a bitfield, and set a MEM_SEG_LAST flag to signal that we have reached the end of the list. The last segment's seg_next pointer then points to the first byte which is not usable. For example, at present, 0xbfbf is the last usable byte, so 0xbfc0 will be the seg_next pointer, but with MEM_SEG_LAST set in the seg_state bitfield.
Eliminating seg_end and using seg_start + seg_size reduces the mem_seg code by around 80 bytes. However, there is one problem: in the pmem allocator, seg_start + seg_size wraps to 64K (0x10000), which in 16-bit arithmetic becomes 0 with the carry flag set. This overflow is unique to the pmem allocator, but by specifying the size of the page as 0x3fff rather than 0x4000 we avoid it, at the cost of wasting the final byte of each memory page. I can live with losing 4 bytes spread over 4 pages.