


 74xx based CPU (yet another) 

Joined: Fri Mar 22, 2019 8:03 am
Posts: 168
Location: Girona-Catalonia
monsonite wrote:
Hi Joan,

If you have not read the "One Page Computing" thread - there is a convenient link here https://revaldinho.github.io/opc/
...
There is plenty of rich information here.

Ken

Thanks for that. It was indeed an interesting read. Unfortunately it all stays at the academic/conceptual level, as far as I can understand. The use of "predicated execution", like in some ARM processors, is interesting. The more advanced instruction sets do not seem to implement explicit jumps, so I suppose they are performed by writing directly to the PC, which possibly complicates any hardware pipelining implementation. I can also spot a couple of missing features that prevent them from being used as the output of a complete compiler, such as unsigned arithmetic or type promotion. The absolutely regular bit patterns are likeable, although they come at the expense of compactness, because in most cases an additional operand is required. At the end of the day, the choice of an instruction set has pros and cons depending on your objectives. I tried to create a set that is both compact and complete, with as many regular patterns as possible. This is by no means definitive, as I may find ways to improve it based on suggestions or new experience.

Joan


Mon Mar 25, 2019 4:56 pm

Joined: Wed Jan 09, 2013 6:54 pm
Posts: 1258
I do like the idea of being simple and regular, or at least starting that way, as it makes for more progress in the early stages. It might even be a sufficient machine!

The OPC adventures have been very interesting. All the machines are practically usable, so if you think something is missing, maybe that would make an interesting conversation. We have BCPL and a C-like compiler for at least one of them. (Edit: and the machines have run, in emulation, on FPGA, and as embedded emulations.)

But the various OPC-related threads are not all in one place. Here are some, maybe all:


Mon Mar 25, 2019 5:39 pm

Joined: Fri Mar 22, 2019 8:03 am
Posts: 168
Location: Girona-Catalonia
BigEd wrote:
I do like the idea of being simple and regular, or at least starting that way, as it makes for more progress in the early stages. It might even be a sufficient machine!

The OPC adventures have been very interesting. All the machines are practically usable, so if you think something is missing, maybe that would make an interesting conversation. We have BCPL and a C-like compiler for at least one of them. (Edit: and the machines have run, in emulation, on FPGA, and as embedded emulations.)

But the various OPC-related threads are not all in one place. Here are some, maybe all:


That's all very interesting!

I'm not saying that they can't be run for real, but I think there are some missing instructions that can make implementing some algorithms difficult. One case I was considering is where you want to access bytes from memory and perform operations on them. As far as I understand, these machines are exclusively 16-bit and have no 8-bit instructions.

For example, suppose we have a stream of bytes and we want to calculate a signed checksum of them, that is, the signed sum of all the bytes. I think this could be tricky with the existing instruction set. It can still be done by reading the stream in word chunks, extracting bytes from words with shifts, logical operations and swaps, then performing sign extension with more logical operations and comparisons, computing the checksum as words, and finally truncating the result back to a byte. I suppose all production processors have byte instructions for a reason. Implementing them on the OPC processor would mean almost duplicating the existing set, which would require an additional bit for instruction encoding, which in turn would have to be taken from somewhere, maybe by removing or simplifying the predicate feature, or by departing from the regular bit pattern design.
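To make the byte-juggling cost concrete, here is a minimal C sketch of the expansion described above, as a compiler for a word-only 16-bit machine might have to produce it. The `byte_checksum` name and the low-byte-first packing are my assumptions for illustration, not anything from the OPC design:

```c
#include <stdint.h>

/* Signed byte checksum over a buffer packed into 16-bit words.
   Bytes are extracted with shifts and masks and sign-extended by
   hand, since the machine only has word loads and word arithmetic. */
int16_t byte_checksum(const uint16_t *words, int nbytes)
{
    int16_t sum = 0;
    for (int i = 0; i < nbytes; i++) {
        uint16_t w = words[i >> 1];                       /* word load only */
        uint16_t b = (i & 1) ? (uint16_t)(w >> 8)         /* high byte */
                             : (uint16_t)(w & 0xFF);      /* low byte */
        if (b & 0x80)                                     /* manual sign extension */
            b |= 0xFF00;
        sum = (int16_t)(sum + (int16_t)b);                /* word-wide add */
    }
    uint16_t r = (uint16_t)sum & 0xFF;                    /* truncate to a byte */
    if (r & 0x80)
        r |= 0xFF00;
    return (int16_t)r;
}
```

Every line of the body turns into one or more real instructions on such a machine, whereas a byte-capable ISA would do the load and sign extension in one step.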

Another issue (or oddity in this case) is the lack of an overflow flag, and the inability to combine flags in instruction predicates. I think this can make certain comparisons trickier, since signed comparisons take the overflow flag into account. I suppose this is partially overcome by the existence of two different compare instructions (CMP and CMPC), which I assume act on the flags in different ways, but that's different from what's normally used in production processors.
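For reference, the way a conventional machine derives a signed comparison from its flags can be sketched in C. This is the textbook N and V derivation after a subtract, shown as a general illustration, not as the OPC's actual flag model:

```c
#include <stdbool.h>
#include <stdint.h>

/* After d = a - b on a 16-bit ALU, "signed less-than" is N != V.
   A machine without a V flag needs extra instructions to recover
   this condition. */
bool signed_lt(int16_t a, int16_t b)
{
    uint16_t ua = (uint16_t)a, ub = (uint16_t)b;
    uint16_t d = (uint16_t)(ua - ub);
    bool n = (d & 0x8000) != 0;                  /* negative flag */
    /* overflow: operand signs differ and the result's sign
       differs from a's sign */
    bool v = ((ua ^ ub) & (ua ^ d) & 0x8000) != 0;
    return n != v;                               /* signed a < b */
}
```

The overflow case matters precisely when the subtraction wraps, which is when N alone gives the wrong answer.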

The project is truly impressive, however, especially considering all the tools that have been developed around it, which I just discovered, and the instruction set makes the architecture really easy to implement. I will definitely keep an eye on it as I advance into more aspects of my build, so thanks for sharing all these links.

Joan


Mon Mar 25, 2019 10:01 pm

Joined: Wed Jan 09, 2013 6:54 pm
Posts: 1258
Nice to hear you appreciate some of the finer points! Yes, indeed, we chose to make a word-addressed machine, and so byte operations take a bit of juggling, just as nibbles would take juggling in a conventional machine. There are lots of things to be said about that, but the main point probably is that we were working within a one-page constraint and I think you're aiming to make a machine that's a more regular target for a compiler, and are not constrained to one page. So, different priorities, different facilities.


Mon Mar 25, 2019 10:09 pm

Joined: Fri Mar 22, 2019 8:03 am
Posts: 168
Location: Girona-Catalonia
BigEd wrote:
Nice to hear you appreciate some of the finer points! Yes, indeed, we chose to make a word-addressed machine, and so byte operations take a bit of juggling, just as nibbles would take juggling in a conventional machine. There are lots of things to be said about that, but the main point probably is that we were working within a one-page constraint and I think you're aiming to make a machine that's a more regular target for a compiler, and are not constrained to one page. So, different priorities, different facilities.

Yes, THAT's exactly the point. I am currently struggling (for lack of a better word) with a standard C/C++ compiler implementation. And by standard I mean standard: the front end is the Clang compiler with all its bells and whistles, including astonishing target-independent code optimisation algorithms, so I am only configuring a backend for my particular architecture. In fact, another point of the chosen architecture is making things easier for the backend implementation, so if I need to add a new instruction just to make a particular code pattern easier, and it does not complicate things in hardware, then I'll just add that instruction.


Mon Mar 25, 2019 10:18 pm

Joined: Wed Jan 09, 2013 6:54 pm
Posts: 1258
You remind me of Jan Gray's description of his project progress: first configure a C compiler to do what you want, then make the machine that it targets!


Mon Mar 25, 2019 10:53 pm

Joined: Fri Mar 22, 2019 8:03 am
Posts: 168
Location: Girona-Catalonia
Yes, the plan is first to define the ISA and have a minimum set of tools ready to use before starting with hardware.

I have been working further on the LLVM compiler backend and have studied in some detail what it produces for other 16-bit instruction sets, namely the MSP430, the AVR, and ARM-Thumb. I have also had sporadic looks at what the compiler does for the x86. It is not a reference for my goals, but it's interesting to learn how the compiler uses the instruction set to improve code density and gain performance, as it offers a few optimisation tricks that are generally unavailable on RISC-oriented processors.

I have compared what the compiler produces for each architecture from identical pieces of C code with the still-unrefined version of my backend (I call it the CPU74). I don't have branches and other details running yet, but I am able to compile linear code including function calls, so I can look at function argument and variable accesses on the stack frame, as well as the handling of types: so far only signed and unsigned char (1 byte long) and signed and unsigned short int (2 bytes long).

I verified that there's some sort of tradeoff between the load-store architecture, the number of available registers, and the calling convention. For example, you can instruct the compiler to use a combination of registers and values pushed onto the stack for function argument transfers and value returns. You can also define a register subset as callee-saved registers that must keep the same value after returning from a subroutine. If there aren't enough available registers, you must implement code to emit instructions that store values on the stack and load them back into registers before the subroutine ends.

The more registers there are, the lower the chances of having to use the stack for argument transfer or intermediate value storage, which may increase performance thanks to fewer memory accesses. However, we cannot have an arbitrary number of registers, because they use hardware resources, and they require wider fields in the instruction set to accommodate their encodings. On the other hand, the faster we can access values stored on the stack, the better.
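The encoding side of this tradeoff is easy to make concrete with a little arithmetic. A back-of-the-envelope C sketch, assuming a 16-bit instruction word with two register fields (the numbers are illustrative and not tied to any particular ISA):

```c
/* Bits needed per register field for a register file of a given size. */
static int reg_bits(int regs)
{
    int b = 0;
    while ((1 << b) < regs)
        b++;
    return b;
}

/* Bits left for opcode and embedded constants in a 16-bit instruction
   with two register fields. */
int bits_left(int regs)
{
    return 16 - 2 * reg_bits(regs);
}
```

With 8 registers each field costs 3 bits, leaving 10 bits for opcode and immediates; with 16 registers only 8 bits remain, which is one reason embedded offset fields become scarce as the register file grows.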

With the above in consideration, I played a bit with the compiler and found that the most carefully balanced architecture in all respects seems to me to be the ARM-Thumb.

The MSP430 appears to rely on a significant number of registers (16), but indexed stack access is comparatively more expensive because it's a two-word instruction; the second word contains the indexed offset field. The instruction set also enables extensive memory access for all ALU instructions, albeit expensive, thanks to totally orthogonal addressing modes, which in my opinion defeats the need for so many registers. The practical result is that most registers are hardly used at all under normal compiler conditions. The instruction set orthogonality, although definitely nice, puts a limit on the number of available instruction encoding slots and forces all addressing modes requiring offsets to use the next instruction word for that. To me this results in a rather unbalanced choice of instructions, with many expensive instructions that can't really be compensated by the large number of registers.

The ARM-Thumb, on the contrary, uses 8 general purpose registers (about half), and it is designed around a load-store architecture. These two things help to free encoding slots, which allows several instructions to feature constant fields or memory offsets embedded in the same (one-word) instruction. For example, there are single-word stack load-store instructions with the offset field enclosed in the same word, and loading small constants into registers is also possible without double-word encodings. Therefore, stack access is presumably faster than on the MSP430 while increasing code density, and the rather common need to load a small constant is also cheap. The comparatively small number of registers does not seem to be a big deal in normal circumstances.
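As an illustration of how such embedded fields pay off, here is a sketch of encoding a hypothetical single-word "load register from SP plus offset" instruction. The 5-bit opcode, 3-bit register field and 8-bit offset layout is invented for illustration, in the spirit of Thumb but not the real ARM encoding:

```c
#include <stdint.h>

/* Hypothetical Thumb-like encoding: 5-bit opcode, 3-bit destination
   register and 8-bit word offset from SP all fit in one 16-bit word,
   so common stack accesses need no second instruction word. */
uint16_t encode_ldr_sp(unsigned rd, unsigned offset)
{
    const uint16_t OPC = 0x13;   /* made-up opcode value */
    return (uint16_t)((OPC << 11) | ((rd & 7u) << 8) | (offset & 0xFFu));
}
```

An 8-bit word offset covers a 256-word stack frame, which is plenty for typical compiled functions; that is the kind of balance being praised above.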

The AVR is not really comparable here, because being an 8-bit processor it takes twice the number of instructions to produce a 16-bit result, and in reality I have only included it in a limited number of tests.

So all the above is just to say that I decided once again to replace my instruction set and try a different one. This will be my third attempt. My second (current) iteration was a load-store set, already featuring small constant fields embedded in some instructions, but requiring a second word for the offset of indexed indirect memory accesses. It also had 16 registers, including the PC and SP. Now I plan to reduce the number of registers to 8 general purpose ones and treat SP and PC separately, with specific instructions. This should help to free encodings for adding more embedded fields, especially for indexed memory addressing, and presumably obtain a more balanced set overall, where all the features are used by the compiler. So more on that as I get somewhere.

Joan.


Thu Mar 28, 2019 10:31 am

Joined: Fri Mar 22, 2019 8:03 am
Posts: 168
Location: Girona-Catalonia
Other than the above (previous post), I have been struggling with a bug in the LLVM compiler backend tools that has driven me nuts for almost 2 days. For information, I think it's useful to describe what happened.

One of the aspects of the backend creation is the definition of the "Instruction Formats" and the "Instruction Info". This is done with a set of files written in a meta language that is later processed into actual C++ code and compiled into the rest of the backend infrastructure.

An excerpt of the Instruction Formats file looks like this:

Code:
// Type 1 generic instruction format

class Type1< bits<4> opbase, bits<4> opcode, dag outs, dag ins, string asmstr, list<dag> pattern>
      : Instruction16< outs, ins, asmstr, pattern>
{
  bits<4> rd;   // unbound var
  bits<4> rs;   // unbound var
  let Inst{15-12} = opbase;
  let Inst{11-8} = opcode;
  let Inst{7-4} = rs;
  let Inst{3-0} = rd;
}

class Type1K< bits<4> opbase, bits<4> opcode, dag outs, dag ins, string asmstr, list<dag> pattern>
      : Instruction32< outs, ins, asmstr, pattern >
{
  bits<4> rd;   // unbound var
  bits<4> rs;   // unbound var
  let Inst{15-12} = opbase;
  let Inst{11-8} = opcode;
  let Inst{7-4} = rs;
  let Inst{3-0} = rd;
  bits<16> K;     // unbound
  let Inst{31-16} = K;
}


Similarly, the Instruction Info file may look like this:

Code:
class T1ir16mov<string opcStr, bits<4> opbase, bits<4> opcode>: Type1
                <opbase, opcode,
                (outs GR16:$rd), (ins memRegInd:$rs),
                !strconcat(opcStr, ".w\t($rs), $rd"),
                [(set GR16:$rd, (i16 (load memRegInd:$rs)))]>
                {let canFoldAsLoad = 1; let isReMaterializable = 1;}

class T1ir8mov<string opcStr, bits<4> opbase, bits<4> opcode>: Type1
                <opbase, opcode,
                (outs GR8:$rd), (ins memRegInd:$rs),
                !strconcat(opcStr, ".b\t($rs), $rd"),
                [(set GR8:$rd, (i8 (load memRegInd:$rs)))]>
                {let isReMaterializable = 1;}

class T1ri16mov<string opcStr, bits<4> opbase, bits<4> opcode>: Type1
                <opbase, opcode,
                (outs), (ins memRegInd:$rd, GR16:$rs),
                !strconcat(opcStr, ".w\t$rs, ($rd)"),
                [(store GR16:$rs, memRegInd:$rd)]>;

class T1ri8mov<string opcStr, bits<4> opbase, bits<4> opcode>: Type1
                <opbase, opcode,
                (outs), (ins memRegInd:$rd, GR8:$rs),
                !strconcat(opcStr, ".b\t$rs, ($rd)"),
                [(store GR8:$rs, memRegInd:$rd)]>;

class T1mr16mov<string opcStr, bits<4> opbase, bits<4> opcode>: Type1K
                <opbase, opcode,
                (outs GR16:$dst), (ins memRegIdx:$src),     // Workaround: a bug prevents LLVM from properly matching instruction format names.
                !strconcat(opcStr, ".w\t$src, $dst"),
                [(set GR16:$dst, (load memRegIdx:$src))]>
                {let canFoldAsLoad = 1; let isReMaterializable = 1;}

class T1mr8mov<string opcStr, bits<4> opbase, bits<4> opcode>: Type1K
                <opbase, opcode,
                (outs GR8:$dst), (ins memRegIdx:$src),
                !strconcat(opcStr, ".b\t$src, $dst"),
                [(set GR8:$dst, (i8 (load memRegIdx:$src)))]>
                {let canFoldAsLoad = 1; let isReMaterializable = 1;}

class T1rm16mov<string opcStr, bits<4> opbase, bits<4> opcode>: Type1K
                <opbase, opcode,
                (outs), (ins memRegIdx:$dst, GR16:$rs),
                !strconcat(opcStr, ".w\t$rs, $dst"),
                [(store GR16:$rs, memRegIdx:$dst)]>;

class T1rm8mov<string opcStr, bits<4> opbase, bits<4> opcode>: Type1K
                <opbase, opcode,
                (outs), (ins memRegIdx:$dst, GR8:$rs),
                !strconcat(opcStr, ".b\t$rs, $dst"),
                [(store GR8:$rs, memRegIdx:$dst)]>;

def MOVir16 : T1ir16mov< "movir", 1, 0b1000 >;
def MOVir8 : T1ir8mov < "movir", 1, 0b1001 >;
def MOVri16 : T1ri16mov< "movri", 1, 0b1010 >;
def MOVri8 : T1ri8mov < "movri", 1, 0b1011 >;
def MOVmr16 : T1mr16mov< "movmr", 1, 0b1100 >;
def MOVmr8 : T1mr8mov < "movmr", 1, 0b1101 >;
def MOVrm16 : T1rm16mov< "movrm", 1, 0b1110 >;
def MOVrm8 : T1rm8mov < "movrm", 1, 0b1111 >;


It's really a bit more complicated, because you also need to go into instruction operand definitions, addressing modes, instruction constraints, explicit execution patterns, and then create a lot of custom C++ code for the many situations that the system can't handle automatically. But for those who may not be familiar with LLVM, you get the idea. The worst thing, as I stated in one of my initial postings, is the absolute lack of documentation covering any detail beyond the basics.

So the problem was that the system is supposed to match variable names in the Instruction Info file, such as $rd or $rs above, with unbound variable names in the Instruction Format file. If the same names exist, the variable values are bound together between both files. Mismatched variable names are just bound in declaration order for every Instruction Format and Instruction Info pair. This is so poorly documented that I had to figure it all out by trial and error and by looking at the LLVM source code. Anyway, the variable binding algorithm works most of the time, but for some reason it failed for the "T1mr16mov" instruction class. I spent literally hours trying to figure out what was wrong with my instruction definitions, to no avail. The thing compiled nicely, but the resulting backend produced wrong assembly code whatever I tried. By stepping through the source code in the debugger (no less than hundreds of thousands of lines of C++), I finally figured out that all my problems were caused by a bug in the binding algorithm that messed up name matching.

The workaround I found was to avoid variable name matching in the offending case and just let it produce bindings based on declaration order. Not nice, and it makes the code more confusing, but it seems to work so far, at least in the areas that are already implemented. The existing working backends (ARM, X86, Sparc, MIPS and so on) do not seem to be affected by this issue, because they rely a lot more on custom C++ code and use bindings based on declaration order rather than variable name matching. So maybe their developers were aware of the bug? Or name matching was not available in earlier LLVM versions?

Anyway, I can't wait to have the compiler working and to start on the assembler and simulator, over which I should have a lot more control. Several months will pass before that...

Joan.


Last edited by joanlluch on Thu Mar 28, 2019 11:56 am, edited 1 time in total.



Thu Mar 28, 2019 11:15 am

Joined: Wed Apr 24, 2013 9:40 pm
Posts: 177
Location: Huntsville, AL
Thanks for the update. I think that the approach you're taking here will pay dividends.

_________________
Michael A.


Thu Mar 28, 2019 11:37 am

Joined: Wed Apr 24, 2013 9:40 pm
Posts: 177
Location: Huntsville, AL
Perhaps you've already tried switching the order of the definitions of these two instruction types.

One thing to keep in mind is the order in which the "parse tree" is generated. I have run into "parse tree" definition order issues recently and in the past. It appears that even automatic "parse tree" generators sometimes need a bit of help from the humans in order to get the order of evaluation correct. In general, I've been cautioned to organize my parse strings from the more complicated to the more general. I frequently get this wrong, and the result is a mismatch. If the target instruction set is particularly general, there will probably be a silent match to a different instruction.

I am currently working on reordering my "parse tree" for the M65C02A assembler. A recent issue in the order of the parse strings resulted in the assembler putting out a valid but incorrect instruction. A similar issue occurred with a front-end compiler that I'm working on that targets LLVM and uses the x86 backend to produce executables. It's not exactly the same problem that you're describing, but I think that it's related.

_________________
Michael A.


Thu Mar 28, 2019 11:52 am

Joined: Fri Mar 22, 2019 8:03 am
Posts: 168
Location: Girona-Catalonia
Hi Michael,

I understand the issue you describe, but in my case it was something a bit different. Let me try to explain:

CASE 1: If you look at the code I posted above for the definition of "class T1mr16mov", you'll see that it's an instruction to LOAD memory into a register. It takes two operands: the register destination $dst and the memory source $src. The memory source is in turn defined as a "memRegIdx" type operand (not shown), which is defined as Register+Offset. So there are in total three (3) operands for the instruction: the destination register, the source base register, and the offset. These should match the 3 unbound variables in class Type1K, as shown in the first code snippet, which refer to the specific bit field locations where the variable values should be written to create the instruction. These 3 bit field locations are named "rd", "rs" and "K" in the Instruction Format file.

CASE 2: We can also look at the definition of "class T1rm16mov". It uses the same instruction pattern, but in this case it is a STORE instruction. The only difference is the values that get assigned to the instruction opcode and operands.

For Case 2, the LLVM system finds that the first variable, rd, does not match a name in the Instruction Info, so it assigns to it the first value of the $dst memory destination (that is, the destination base register). It then finds that rs matches a variable named $rs, so it uses it and assigns it as the source register. The third variable is K, which also has no matching name in the Instruction Info, so it gets assigned the second value of $dst, which is the offset. All fields get assigned correctly.

For Case 1, the system does not work if I use $rd for the destination register (instead of $dst) and an arbitrary name such as $src for the source memory location. In theory it should do the same as in Case 2: first assign rd to $rd, then assign the remaining two, rs and K, in this order, to the base register and offset available in the $src memory operand, so the instruction would be created correctly. However, due to a bug, the system fails to increment the operand counter after assigning rs. It keeps the same counter value for the assignment of both rs and K, and as a consequence the instruction encoding gets messed up, with wrong contents in K. It's a very subtle bug, related to the fact that $src is in reality a single operand with two values, and it only triggers after the match of a previous variable has just happened. On the other hand, if $rd is not defined and I use an arbitrary name such as $dst, then all three variables are assigned correctly.

I am unable to show the buggy source code in the LLVM system now, because I found it while stepping through code in the debugger and I don't even recall in which exact source file it was, but it is part of the MCCodeEmitter generation step of TableGen. The (Target)GenMCCodeEmitter C++ file that is created from the Instruction Format and Info files is wrong; particularly this excerpt:

Code:
case CPU74::MOVm8r16:
    case CPU74::MOVmr16:
    case CPU74::MOVmr8: {
      // op: rd
      op = getMachineOpValue(MI, MI.getOperand(0), Fixups, STI);
      Value |= op & UINT64_C(15);
      // op: rs
      op = getMachineOpValue(MI, MI.getOperand(1), Fixups, STI);
      Value |= (op & UINT64_C(15)) << 4;
      // op: K
      op = getMachineOpValue(MI, MI.getOperand(1), Fixups, STI); 
      Value |= (op & UINT64_C(65535)) << 16;
      break;
    }


The third call, "op = getMachineOpValue(MI, MI.getOperand(1), Fixups, STI);", should be:

Code:
op = getMachineOpValue(MI, MI.getOperand(2), Fixups, STI);


With the described workaround this code is generated correctly.
But anyway, these kinds of things are not particularly interesting, and definitely not funny.

Joan.


Thu Mar 28, 2019 2:12 pm

Joined: Wed Apr 24, 2013 9:40 pm
Posts: 177
Location: Huntsville, AL
Thanks for the detailed explanation. Perhaps it'll stick in my memory, and I can refer to it if I encounter a similar situation in the future as I continue toward a complete implementation of a compiler using the LLVM toolset.

On the matter of interest and humor, I totally agree, but I suppose it is a result of the open source nature and breadth of the LLVM toolset.

I hope you've been able to submit a bug report to the LLVM maintenance team, so someone else doing work like yours does not have to go through the process that you described to reach a "solution".

_________________
Michael A.


Thu Mar 28, 2019 2:37 pm

Joined: Sat Feb 02, 2013 9:40 am
Posts: 920
Location: Canada
Thanks for the heads-up on the binding bug.

A while ago, I got an LLVM backend partially written for FT64, migrated from the RISC-V backend, which wasn't finished at the time I installed the compiler tools. I decided however to get the processor working reliably in an FPGA before taking on the task of finishing an LLVM compiler. For now, I use a compiler I wrote myself, which lacks a lot of fancy optimizations but works well enough to write or port code in C.

_________________
Robert Finch http://www.finitron.ca


Fri Mar 29, 2019 5:26 am

Joined: Wed Jan 09, 2013 6:54 pm
Posts: 1258
When I first heard of LLVM I was very excited, because it could be a breath of fresh air and a new clean codebase compared to gcc's very long history. But it's certainly very complex, and - thanks for the story - quite a challenge, quite an adventure to work with it. I've only taken a very cursory glance, and it was entirely beyond me to engage.


Fri Mar 29, 2019 8:19 am

Joined: Fri Mar 22, 2019 8:03 am
Posts: 168
Location: Girona-Catalonia
Thanks all for your comments.

To be honest, I initially thought that the LLVM thing was more user friendly than it actually is. I had certainly looked at the existing backends and found there was a lot of code in them, but I assumed that this was only because of fine details or peculiarities of the target processors (x86 and ARM are the main offenders). I imagined that by choosing a simple, compiler-friendly architecture there would not be a need to implement that much code (or any at all). But it turns out not to be exactly like that.

However, I do not fancy writing my own compiler, because I have already done so in the past and I regard that as sort of repeating myself. I mean, I would do it if it were required for a job, but not so much as a hobby. The compilers that I implemented were just recursive top-down parsers that simply emit machine code essentially mimicking the sentence flow of the high-level programming language. At most, I implemented simple target-specific machine optimisations, but I never went into the sophistication of target-independent optimisations, much less the advanced ones that the LLVM-Clang front end is able to perform.

When I look at the high quality output of commercial compilers, and the great improvements that many years of team work have put into them, I come to the conclusion that I must use that instead of recycling one of my crude compilers. Furthermore, compilers as good as that already existed in the early 80's, or maybe earlier. I certainly recall that the default C compiler available for the VAX-11 during the 80's produced excellent assembly code, and most of the optimisations available in modern compilers are already 35- or 40-year-old technology. So I think, why shouldn't my future 74xx processor take advantage of that, if possible?

This only means that I'm persisting with LLVM. I have already achieved some early goals, and I hope that I will eventually come out with a working compiler of reasonable quality. What I discard for now (possibly forever) is the generation of .o object files for the linker. I consider this overkill, because I really only need to create assembly files, which I can easily parse to create some form of machine code file that I can move to the processor.

After that, I guess that I will need a lot of help to get started with the hardware. I think I understand the basics of synchronous logic, but hardware is really where I lack a lot of knowledge, and I will surely show a lot of naivety. But I'm excited about eventually getting to that step.

Joan


Fri Mar 29, 2019 11:56 pm