Integer floats with remainder theory : update 2020-10 : Useful for project acceleration

QuantumHelos
Joined: 5 Nov 17
Posts: 190
Credit: 64621856
RAC: 23292
Topic 223777

Integer floats with remainder theory - copyright RS

The relevance of integer floats is that we can do 2 things: float on integer instruction sets at half resolution (a 32Bit Int split as 16Bit.16Bit, 24Bit.8Bit or 28Bit.4Bit), and use the remainder theorem as the capacity to convert data back and forth.
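
As a sketch of the first point, the splits above can be read as fixed-point layouts held in one 32-bit integer register; the FRAC_BITS constant and helper names below are illustrative, not from the original post.

/* Minimal sketch, assuming "16Bit.16Bit" means 16.16 fixed point held in
   one 32-bit integer register; 24.8 and 28.4 are the same idea with a
   different shift. */
#include <stdint.h>
#include <stdio.h>

#define FRAC_BITS 16                        /* 16.16 split; 8 for 24.8, 4 for 28.4 */

static int32_t to_fixed(double x)   { return (int32_t)(x * (1 << FRAC_BITS)); }
static double  to_double(int32_t f) { return (double)f / (1 << FRAC_BITS); }

int main(void) {
    int32_t a = to_fixed(15.05);            /* float carried on the integer unit */
    int32_t b = to_fixed(3.0);
    int32_t c = (int32_t)(((int64_t)a * b) >> FRAC_BITS);   /* fixed-point multiply */
    printf("%f\n", to_double(c));           /* ~45.15 */
    return 0;
}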

RAM/Memory and hard drive storage are major components, & we also need to consider compression & color formats such as DOT5.

Machine learning & server improvement: files attached, please utilise.

https://is.gd/ProcessorLasso

https://is.gd/3DMLSorcerer

**

Integer_Float Remainder op Form 1:(c)RS

The float is formed of 2 integers:

One being the integer part and the other being the remainder, the floating component.

Thus we need two integers per float; for example, 2 32Bit integers make one single float.

Integer A : Remainder B

A + B = Float
(A + B) x (A² + B²) = Float C

Dislocating A and B by a certain number of places gives a float that travels as the integer.
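
A hedged reading of the A : B pair, assuming B is a decimal remainder held at a fixed scale (two places here); the IntFloat name, SCALE constant and helper are illustrative only.

/* Sketch of "Integer A : Remainder B": two integers carry one float,
   assuming B is a decimal remainder at a fixed scale of two places,
   so A = 15, B = 5 stands for 15.05. */
#include <stdio.h>

#define SCALE 100                           /* 10^2: remainder holds two decimal places */

typedef struct { int A; int B; } IntFloat;

static double int_float_value(IntFloat x) {
    return (double)x.A + (double)x.B / SCALE;
}

int main(void) {
    IntFloat v = { 15, 5 };                 /* 15.05 travelling as two integers */
    printf("%f\n", int_float_value(v));
    return 0;
}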

Expansion data sets:

A1 : B1
A2 : B2
Ar : Br

F1 : Bf1
F2 : Bf2
Fr : Bfr

A : Integer
F : Float
r : Remainder

The data set expansion can be infinite, and each expansion of the data set doubles the precision.
With the remainder, infinite computation = infinite precision.

Not only that, but the computation can be executed as an integer, as a float, or indeed as both.
The relevance is that computers have a lot of integer registers, and float registers too.
The data can also be compressed in RAM without using larger buffer widths.
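
One possible reading of the expansion table above, assuming each further remainder pair appends another block of digits to the same value; the expand helper and the digit counts are assumptions for illustration.

/* Sketch of the expansion data set: A1:B1, A2:B2, ... where each extra
   remainder pair refines the same value by a further block of digits. */
#include <stdio.h>

static double expand(int A, const int *B, int pairs, int digits_per_pair) {
    double value = A, scale = 1.0;
    for (int i = 0; i < pairs; ++i) {
        for (int d = 0; d < digits_per_pair; ++d) scale *= 10.0;
        value += B[i] / scale;              /* each pair adds precision */
    }
    return value;
}

int main(void) {
    int B[] = { 5, 25 };                    /* B1 = .05, then B2 appends .0025 */
    printf("%f\n", expand(15, B, 2, 2));    /* 15.0525 */
    return 0;
}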

(copyright) Rupert Summerskill

COP-Roll : (c)Rupert S
ROLL Operation Syntax : RS : (Integer & Float)
Processing Cache (displacement) Operation Roll Arithmetic Maths : For Multiplication, Division, Addition & Subtraction : P-COR-SAM

*
Addressable by Compiler update, Firmware update, CPU/GPU Rules, Firmware, BIOS, Operating System & Program/Machine Learning.

Machine Learning will considerably enhance the efficiency of Cache & Processor routine operations & make rules for all developers & firmware creators.

AI Machine Learning Optimization : https://is.gd/3DMLSorcerer
*

In a single loop, a multiply at a floating point precision of under 1 (for example 0.00001) requires the following:

In Integer float :

A multiply of a sum such as 15.05 * 3 is 2 operations:
(15 x 3) + ((0.05 rolled left 2 places) x 3) = 45 with remainder R = (5 x 3) = 15, i.e. 45.15

In other words: 2 storage values, the remainder R (the float component) & the number.
However, multiplication by a float such as 0.01 is a division in one example & a multiply-and-roll in another.
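
The 15.05 x 3 case written out as the two integer operations it describes, with the roll shown as a decimal shift of the remainder; the carry handling is an assumption for when the remainder overflows two places.

/* The 15.05 * 3 example: integer part and rolled remainder are multiplied
   separately, then any remainder overflow carries into the integer part. */
#include <stdio.h>

int main(void) {
    int A = 15, R = 5;                      /* 15.05 held as integer 15, remainder 05 */
    int k = 3;
    int intpart = A * k;                    /* 15 x 3 = 45 */
    int rempart = R * k;                    /* (rolled 0.05 -> 5) x 3 = 15, i.e. 0.15 */
    if (rempart >= 100) {                   /* carry if the remainder exceeds two places */
        intpart += rempart / 100;
        rempart %= 100;
    }
    printf("%d.%02d\n", intpart, rempart);  /* 45.15 */
    return 0;
}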

Roll is a memory operation in CPU terms & is a single processor loop push

In all operations where division is banned, we have to decide whether the operation is a multiple or a division of base value 10 (1, 10, 100 and so on).

Such an operation can be carried out by addition, subtraction or roll; values such as 200x require multiple additions under the multiply-is-banned principle.

Multiple sets of memory arrays in series-parallel are the equivalent of multiplication through addition.

Subtraction through addition requires inverting the power phase of a single component array.
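
A literal sketch of both points under the multiply-is-banned principle; the array of adders is shown serially here, and the function names are illustrative.

/* Multiplication through addition, and subtraction through addition of the
   negated (phase-inverted) value, per the multiply-is-banned principle. */
#include <stdio.h>

static int mul_by_addition(int a, int times) {
    int sum = 0;
    for (int i = 0; i < times; ++i) sum += a;   /* one adder per array element */
    return sum;
}

static int sub_by_addition(int a, int b) {
    return a + (-b);                        /* invert the "power phase" of b, then add */
}

int main(void) {
    printf("%d\n", mul_by_addition(200, 3));    /* 600, additions only */
    printf("%d\n", sub_by_addition(45, 15));    /* 30, via addition of -15 */
    return 0;
}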

Thus we are able to do addition and subtraction on all sums; traditional maths solvers have done this before.
Roll operations are our fast way to multiply;

However, arrays of addition & subtraction are a logical fast loop: a full operation in a single cycle, because there is no sideways roll.

However, direct memory displacement between 010100 & 101000 can use a cache to displace by 1;
An arrangement such as a 4-digit displacement cache rolls the operation on memory transfer.

Displace on operation (cycle 2) does minimize operations.

Having that cache further up the binary pipeline does reduce the number of roll-cache modifier buffers that we need;

However, the time we save, the time we lose & the CPU space we lose or gain depend specifically on how limited the Roll Cache is.
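
The 010100 to 101000 displacement mentioned above is a left shift by one place; a minimal illustration, with the 4-digit displacement shown as a shift by four (the values and variable names are just for illustration).

/* 010100 -> 101000 is a left shift by one binary place (20 -> 40);
   a 4-digit displacement cache would shift by four places (x16). */
#include <stdio.h>

int main(void) {
    unsigned v = 0x14;                      /* binary 010100 = 20 */
    unsigned one_place  = v << 1;           /* binary 101000 = 40 */
    unsigned four_place = v << 4;           /* 4-place displacement: 20 * 16 = 320 */
    printf("%u %u %u\n", v, one_place, four_place);
    return 0;
}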

Integer_Float Remainder op Form 2:(c)RS

32Bit (2x16Bit) is the most logical for 32Bit registers
64Bit (2x32Bit) is the most logical for 64Bit registers

Byte Swap operation
Byte Inversion operation

For example, DWord: 8

2 x DWord: an 8 Bit Integer & an 8 Bit value of 4 roll places & a 4 Bit value.
Displacing the value 4 bits within the 8 makes the value an integer;
Alternatively, adaptive maths adds a 0 (as for example in multiplication) & removes it afterwards.
The usage of adaptation takes the second DWord & effectively makes it an accurate remainder.

In that example I believe one less operation is needed than in the 16Bit example.

Operation example 2 uses an embedded multiply x 10 & a divide afterwards (to get the resulting float).
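
A sketch of what the embedded multiply-by-10-and-divide-after could look like, reading it as pre-scaling by a power of ten so the remainder becomes a whole integer; the scale of 100 is an assumed example.

/* Adaptive-maths sketch: pre-scale by a power of ten so the value is a
   whole integer, multiply on the integer unit, divide the scale out last. */
#include <stdio.h>

int main(void) {
    int scale = 100;                        /* embedded multiply by 10^2 */
    int a = 1505;                           /* 15.05 * 100, now an exact integer */
    int b = 3;
    int product = a * b;                    /* pure integer multiply: 4515 */
    printf("%d.%02d\n", product / scale, product % scale);   /* 45.15 */
    return 0;
}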

32Bit memory space: 2x 16Bit values, one 16Bit integer & one 0.x value
That can effectively be displaced 16 decimal places.
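
A sketch of that packing: one 32-bit word holding a 16Bit integer in the high half and a 16Bit fractional value in the low half. Reading the 16-place displacement as 16 binary places is an assumption, as are the pack/unpack helper names.

/* Form 2 sketch: one 32-bit word = 16 Bit integer (high half) + 16 Bit
   fractional value (low half), i.e. the fraction displaced 16 places. */
#include <stdint.h>
#include <stdio.h>

static uint32_t pack(uint16_t intpart, uint16_t frac) {
    return ((uint32_t)intpart << 16) | frac;
}

static double unpack(uint32_t w) {
    return (double)(w >> 16) + (double)(w & 0xFFFFu) / 65536.0;
}

int main(void) {
    uint32_t w = pack(15, (uint16_t)(0.05 * 65536));   /* ~15.05 in one register */
    printf("%f\n", unpack(w));
    return 0;
}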

The maths displayed above requires inverting Multiply & Division for Mul & Div ops on the remainder; however, it does not when finalised in the FLOAT unit (FPU) for large-precision maths.

This allows a fully integer CPU to do float maths and store the results as integers,
Both allowing full use of all registers & storage as purely Integer_Float;
It also allows full cache usage for SiMD, AVX & Vector units.

Byte Inversion simply allows Byte Swap & Inversion to fully realize the performance improvements,
& also Byte Inversion maths.
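
Byte swap and inversion as they are commonly spelt on current compilers; __builtin_bswap32 is the GCC/Clang builtin, and treating Byte Inversion as a bitwise NOT is an assumption about the intended operation.

/* Byte Swap and one reading of Byte Inversion on a packed Integer_Float word. */
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t w = 0x000F0CCDu;               /* example packed word (15 . ~0.05) */
    uint32_t swapped  = __builtin_bswap32(w);   /* reverse the byte order */
    uint32_t inverted = ~w;                     /* invert every bit (and so every byte) */
    printf("%08X %08X %08X\n", (unsigned)w, (unsigned)swapped, (unsigned)inverted);
    return 0;
}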

SiMD, AVX, Vector : ByteSwap, Invert, Mul, Div etcetera; ergo Float compatible & acceleration
Float : High-precision finalization .. lower frequency = more potential
Integer + Byte functions : Pure acceleration with minimal loss & core function utilization

This is all algebra; Categorically.

(c) Rupert S https://science.n-helix.com

Optimisation & Use:

https://science.n-helix.com/2019/05/compiler-optimisation.html

https://science.n-helix.com/2020/01/float-hlsl-spir-v-compiler-role.html

Compiler books & reading : https://science.n-helix.com/2017/04/boinc.html

http://science.n-helix.com/2020/02/fpu-double-precision.html

https://science.n-helix.com/2019/06/kernel.html

https://science.n-helix.com/2019/06/vulkan-stack.html

http://science.n-helix.com/2018/09/hpc-pack-install-guide.html

HDR Pure flow (c)RS
Data is converted from the OS to the GPU as compressed & optimized memory data, and then into dithered, optimized, smooth & precise rendering on every compatible monitor and other device.
The reason we do this is flow control and optimization of the final output of the devices; also, the main chunk of data the OS uses is transparently the best,
In 5D, 4D, 3D & 2D data, and can thus be pre-compressed, cache optimized & rendered.