How much electricity (voltage and amperage), and for how long, must flow for a chip or processor to take it as a "1"?
Binary data consists of changes of voltage at some point in a circuit, between the voltage level for a ‘0’ and some other voltage for a ‘1’. The ‘1’ voltage can be higher than the ‘0’ (positive logic) or lower (negative logic). The voltage level at which the input of a device sees a change in logic level between 0 and 1 depends on the IC that is receiving the data. In the case of 4000-series CMOS gates it is nominally 50% of the supply voltage, so anything lower than that is a ‘0’ and anything higher is a ‘1’.
There is no specific time for an individual 0 or 1 (a ‘bit’). Data is usually sent from one device to another at standardised speeds, e.g. at 75 bps (bits per second) or multiples of that: 150, 300, 600, 1200 bps etc.
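To put numbers on that: the time one bit occupies on the line is simply the reciprocal of the bit rate. A small sketch (the function name is mine, and the rates are the classic 75 bps series mentioned above):

```python
def bit_time_ms(bps):
    """Duration of one bit in milliseconds at a given bit rate."""
    return 1000 / bps

# the classic standard rates: 75 bps and its multiples
for bps in (75, 150, 300, 600, 1200):
    print(f"{bps:5d} bps -> {bit_time_ms(bps):.2f} ms per bit")
```

So at 1200 bps each bit lasts about 0.83 ms, and at 75 bps about 13.3 ms.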
How is machine code (binary/0 1) physically represented?
I mean, let's say a string of code: 111110110000111
I assume you mean ‘how is it drawn on paper’. Sending a long string of binary bits is unwieldy and prone to errors, so data is normally sent in groups of 8 bits called ‘bytes’. Alphanumerics are represented according to the ASCII codes. The waveform of a byte consisting of 01101010 is shown in Fig1. The logic 1 level is drawn as +5V, but it may be any voltage.
Depending on the protocol used, a byte may be sent with the most significant bit (MSB) or the least significant bit (LSB) first.
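To make the bit-order point concrete, here is a sketch (variable names are mine) showing the same byte, 01101010, as the sequence of bits it would produce on the wire in each order:

```python
byte = 0b01101010

# bit 0 (the LSB) leaves the device first
lsb_first = [(byte >> i) & 1 for i in range(8)]
# bit 7 (the MSB) leaves the device first
msb_first = [(byte >> i) & 1 for i in range(7, -1, -1)]

print("LSB first:", lsb_first)  # [0, 1, 0, 1, 0, 1, 1, 0]
print("MSB first:", msb_first)  # [0, 1, 1, 0, 1, 0, 1, 0]
```

The receiver must agree with the sender on which order is in use, or every byte will be read scrambled.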
If a byte were to be shown as text, it would be split into two 4-bit ‘nibbles’, each shown as a numeral from 0 to 9 or as a letter from A to F for 10 to 15. So 11100100 would be shown as E4.
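The nibble split is just a shift and a mask. A minimal sketch (variable names are mine):

```python
byte = 0b11100100

high = byte >> 4    # upper nibble: 1110 = 14 = E
low = byte & 0x0F   # lower nibble: 0100 = 4

print(f"{high:X}{low:X}")  # -> E4
```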
if there are consecutive ones (11111) is there a method/need to wait for the electricity to drop to ground from one 1 to another?
It’s all a matter of timing. The receiver must know whether data is positive or negative logic, the speed it is being sent at, and the starting point. In the case of Fig1 the receiver would not see the beginning of the first 0 (at A) because it is at the base level of 0V. The data line normally sits at logic 1 level (+5V), and data is preceded with a ‘start bit’ of logic 0 (0V) to identify the first bit. A ‘1’ is also added as a stop bit to ensure the line is high again after the byte. So each byte is sent as 10 bits of data, as in Fig2.
This is how the receiver reads the data. To keep figures in nice round numbers, let’s assume a rate of 500bps. So each bit is 2ms long in time. Let’s also assume that the data is being received with a microcontroller.
The micro waits for the start bit to occur. As soon as it is detected (B) the micro starts a 1 ms timer. At the end of that time (C) the line is halfway through the start bit. The micro reads the level of the data line at that time (known as ‘clocking in’ the bit). Then it sets its timer to give 8 periods of 2 ms each and checks the data level after each 2 ms period (D), to clock in the 8 bits. It then waits for the end of the stop bit (a ‘1’), then looks for the next start bit and clocks in the next byte, and so on till the end of the data.
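The sampling scheme above can be simulated. In this sketch (names and the 2-samples-per-bit resolution are mine) the line is a list of samples taken every 1 ms, so each 2 ms bit occupies two samples; the receiver finds the falling edge of the start bit, waits half a bit to reach its centre, then steps one full bit at a time:

```python
def receive(levels, samples_per_bit=2):
    """Clock in one byte from a list of sampled line levels (0/1).
    Assumes idle-high line, LSB-first data, 8 data bits."""
    i = levels.index(0)            # falling edge of the start bit (point B)
    i += samples_per_bit // 2      # centre of the start bit (point C)
    byte = 0
    for bit in range(8):
        i += samples_per_bit       # centre of the next data bit (point D)
        byte |= levels[i] << bit   # LSB was sent first
    return byte

# a sampled line for the byte 01101010: idle high, start bit, 8 data bits
# LSB-first, stop bit, idle again; each bit doubled (2 samples per bit)
frame = [0, 0, 1, 0, 1, 0, 1, 1, 0, 1]
line = [1] * 4 + [s for lvl in frame for s in [lvl] * 2] + [1] * 4

print(f"{receive(line):08b}")  # -> 01101010
```

Sampling at the centre of each bit, rather than at its edges, is what gives the receiver tolerance to small timing errors between sender and receiver clocks.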
And how are errors prevented? (so as not to mistake 111 for 11 or 1111, or 101 for 1001, etc.)
The simplest way to check a byte is by using a ‘parity’ bit. An extra 0 or 1 bit is added to the byte, depending on whether there is an odd or even number of ones in the byte. That will detect a single error, but not two errors that leave the number of ones unchanged. A much better way is to use a ‘checksum’, where all the bytes in the data are added together and the result is sent as a single byte at the end of the data stream.
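Both checks are one-liners in practice. A sketch (the function names are mine; even parity and a sum-truncated-to-one-byte checksum are the conventions assumed here):

```python
def even_parity_bit(byte):
    """Parity bit chosen so the total count of ones (byte plus parity)
    is even: 0 if the byte already has an even number of ones, else 1."""
    return bin(byte).count("1") & 1

def checksum(data):
    """Add all the bytes and keep only the low byte of the sum."""
    return sum(data) & 0xFF

print(even_parity_bit(0b01101010))            # 4 ones -> already even -> 0
print(f"{checksum([0x01, 0x02, 0xFF]):02X}")  # 0x102 truncated -> 02
```

The receiver recomputes the same parity bit or checksum from the bytes it clocked in and compares; a mismatch means at least one bit arrived corrupted.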
Googling for ‘wiki ascii code’ or ‘wiki parity byte’ or ‘wiki checksum’ will give more information on those topics.
I hope that helps.