
The first two red cells (C0 and C1) could be used only for a two-byte encoding of a 7-bit ASCII character, which should be encoded in one byte; as described below, such "overlong" sequences are disallowed.

Pink cells are the leading bytes for a sequence of multiple bytes, of which some, but not all, possible continuation sequences are valid. E0 and F0 could start overlong encodings; for these, the lowest code point that is not overlong-encoded is shown.
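
These thresholds can be checked with Python's built-in codec; a quick illustrative sketch:

```python
# Lowest code points that genuinely require 2-, 3-, and 4-byte sequences.
# E0 must be followed by A0..BF, so U+0800 is the lowest 3-byte value;
# F0 must be followed by 90..BF, so U+10000 is the lowest 4-byte value.
examples = {
    "\u0080": b"\xc2\x80",              # first 2-byte code point
    "\u0800": b"\xe0\xa0\x80",          # first non-overlong E0 sequence
    "\U00010000": b"\xf0\x90\x80\x80",  # first non-overlong F0 sequence
}
for ch, expected in examples.items():
    assert ch.encode("utf-8") == expected
```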

In principle, it would be possible to inflate the number of bytes in an encoding by padding the code point with leading 0s. This is called an overlong encoding.

The standard specifies that the correct encoding of a code point use only the minimum number of bytes required to hold the significant bits of the code point. Longer encodings are called overlong and are not valid UTF-8 representations of the code point. This rule maintains a one-to-one correspondence between code points and their valid encodings, so that there is a unique valid encoding for each code point.
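
Python's strict decoder enforces this rule; a short illustration using the classic overlong case, the two-byte form of U+0000:

```python
# A conformant decoder rejects overlong forms: 0xC0 0x80 is a two-byte
# encoding of U+0000, which must be encoded as the single byte 0x00.
try:
    b"\xc0\x80".decode("utf-8")
    raise AssertionError("overlong sequence was accepted")
except UnicodeDecodeError:
    pass  # expected: the overlong form is invalid

# The unique valid encoding round-trips exactly.
assert "\x00".encode("utf-8") == b"\x00"
```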

This ensures that string comparisons and searches are well-defined, and disallowing an overlong encoding of NUL in particular allows the byte 00 to be used as a string terminator. Many of the first UTF-8 decoders would nevertheless decode overlong sequences, ignoring incorrect bits and accepting overlong results; a conformant decoder guarantees that it will neither interpret nor emit an ill-formed code unit sequence. An initial reaction was to design decoders to throw exceptions on errors: early versions of Python 3, for example, could refuse to start when command-line arguments or environment variables contained invalid UTF-8.

The inability to deal with UTF-8 without first confirming it was valid greatly impeded adoption of Unicode. Modern practice is to replace errors with a replacement character (U+FFFD) and to ensure that systems do not interpret these replacement characters in any dangerous way.

The errors can be detected later, when it is convenient to report them, or displayed as replacement marks when the string is drawn for the user. Replacement requires defining how many bytes are in each error. Early decoders would often use the number of bytes the lead byte indicated as the length of the error.

This had the unfortunate problem that a dropped byte would cause the error to consume some of the next character(s), and it made the stream difficult to parse in the reverse direction. The Unicode Standard instead recommends that an error either be one byte long or end before the first disallowed byte; in such decoders 0xE0,0x80,0x80 is three errors, not one, since 0x80 is not a permitted continuation byte after 0xE0.

This means an error is no more than three bytes long and never contains the start of a valid character. Another popular practice is to turn each individual byte into an error; in this case 0xE1,0xA0,0xC0 is three errors, not two. The primary advantage is that there are now only 128 different error bytes.
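
CPython's decoder (3.3 and later) follows the former convention, ending an error before the first disallowed byte, which the replacement-character counts make visible:

```python
U = "\ufffd"  # U+FFFD REPLACEMENT CHARACTER

# 0x80 is not allowed after 0xE0 (it would be overlong), so each byte
# becomes its own error: three replacements.
assert b"\xe0\x80\x80".decode("utf-8", errors="replace") == U * 3

# 0xE1 0xA0 is a valid prefix; the error ends at the disallowed 0xC0,
# giving one two-byte error plus one one-byte error: two replacements.
assert b"\xe1\xa0\xc0".decode("utf-8", errors="replace") == U * 2
```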

This allows the decoder to define a distinct error replacement for each invalid byte. The large number of invalid byte sequences also makes it easy for a program to accept both UTF-8 and legacy encodings such as ISO-8859-1: software can check for UTF-8 correctness and, if that fails, assume the input is in the legacy encoding.
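
That fallback strategy can be sketched in a few lines (ISO-8859-1 is an illustrative choice of legacy encoding; real software might prefer CP1252 or charset detection):

```python
def decode_lenient(data: bytes) -> str:
    """Decode as UTF-8, falling back to a legacy single-byte encoding
    when the bytes are not valid UTF-8."""
    try:
        return data.decode("utf-8")
    except UnicodeDecodeError:
        return data.decode("iso-8859-1")

# Valid UTF-8 decodes as UTF-8; legacy bytes fall through to the fallback.
assert decode_lenient("café".encode("utf-8")) == "café"
assert decode_lenient(b"caf\xe9") == "café"   # 0xE9 is "é" in ISO-8859-1
```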

The surrogate code points U+D800 through U+DFFF are not valid characters, and UTF-8 encodings of them are ill-formed. To preserve invalid UTF-16 sequences containing unpaired surrogate halves, however, their corresponding UTF-8 encodings are sometimes allowed by implementations despite the above rule.
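
Python exposes this behaviour through the surrogatepass error handler; the strict codec refuses lone surrogates, while surrogatepass emits their generalized three-byte forms (a sketch of the mechanism, not a recommendation):

```python
lone = "\ud800"  # an unpaired high surrogate

# Strict UTF-8 refuses to encode it...
try:
    lone.encode("utf-8")
    raise AssertionError("lone surrogate was accepted")
except UnicodeEncodeError:
    pass

# ...but implementations that preserve invalid UTF-16 emit its
# three-byte form, 0xED 0xA0 0x80.
assert lone.encode("utf-8", "surrogatepass") == b"\xed\xa0\x80"
```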

This spelling, "UTF-8", is used in all the Unicode Consortium documents relating to the encoding. Other descriptions, such as those that omit the hyphen or replace it with a space (i.e. "utf8" or "UTF 8"), are not accepted as correct by the governing standards. Supported Windows versions, i.e. Windows 7 and later, have codepage 65001 as a synonym for UTF-8, with better support than in older Windows versions, [27] and Microsoft has a script for Windows 10 to enable it by default for its Notepad program.

The following implementations show slight differences from the UTF-8 specification. In such programs each half of a UTF-16 surrogate pair is encoded as its own three-byte UTF-8 sequence, resulting in six-byte sequences rather than the four bytes UTF-8 prescribes for characters outside the Basic Multilingual Plane. Although this non-optimal encoding is generally not deliberate, a supposed benefit is that it preserves UTF-16 binary sorting order when CESU-8 is binary sorted.
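
The effect can be reproduced in Python, which permits encoding lone surrogates via the surrogatepass error handler (an illustrative sketch, not a CESU-8 codec):

```python
ch = "\U0001F600"  # U+1F600, outside the Basic Multilingual Plane

# Standard UTF-8 uses four bytes.
assert len(ch.encode("utf-8")) == 4

# CESU-8 style: split into a UTF-16 surrogate pair, then encode each
# half as its own three-byte sequence.
hi, lo = divmod(ord(ch) - 0x10000, 0x400)
pair = chr(0xD800 + hi) + chr(0xDC00 + lo)
cesu = pair.encode("utf-8", "surrogatepass")
assert len(cesu) == 6   # six bytes instead of four
```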

In normal usage, the Java programming language supports standard UTF-8 when reading and writing strings through InputStreamReader and OutputStreamWriter if it is the platform's default character set or as requested by the program. However, it uses Modified UTF-8 for object serialization [32] among other applications of DataInput and DataOutput, for the Java Native Interface, [33] and for embedding constant strings in class files.
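
The distinguishing feature of Modified UTF-8 is that U+0000 is written as the overlong pair 0xC0 0x80, so encoded strings never contain a zero byte; a minimal sketch of that one rule (the helper name is illustrative, not a Java API):

```python
def nul_to_modified_utf8(s: str) -> bytes:
    """Encode BMP text with Modified UTF-8's NUL rule.

    Sketch only: supplementary characters, which Modified UTF-8 writes
    CESU-8 style as surrogate pairs, are not handled here.
    """
    out = bytearray()
    for ch in s:
        if ch == "\x00":
            out += b"\xc0\x80"   # two-byte form keeps 0x00 out of the stream
        else:
            out += ch.encode("utf-8")
    return bytes(out)

encoded = nul_to_modified_utf8("a\x00b")
assert encoded == b"a\xc0\x80b"
assert 0 not in encoded          # no embedded zero byte
```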

Many systems that deal with UTF-8 work this way, allowing unpaired surrogates, without considering it a different encoding, as it is simpler. The term "WTF-8" has also been used humorously to refer to erroneously doubly-encoded UTF-8, [38] [39] sometimes with the implication that CP1252 bytes are the only ones encoded.
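
Doubly-encoded UTF-8 arises when already-encoded bytes are mistakenly treated as a single-byte legacy encoding and encoded again; a small reproduction, using ISO-8859-1 as the assumed legacy step:

```python
text = "é"                       # U+00E9
good = text.encode("utf-8")      # b"\xc3\xa9"

# Mistakenly reinterpret those bytes as ISO-8859-1 and re-encode:
mangled = good.decode("iso-8859-1").encode("utf-8")
assert mangled == b"\xc3\x83\xc2\xa9"     # the familiar mojibake "Ã©"
assert mangled.decode("utf-8") == "Ã©"
```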

Software that is not aware of multi-byte encodings will display the BOM as three garbage characters at the start of the document, e.g. "ï»¿" when the bytes are interpreted as Windows-1252. The Unicode Standard neither requires nor recommends the use of the BOM for UTF-8, but warns that it may be encountered at the start of a file as a transcoding artifact.
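
Both behaviours are easy to demonstrate with Python's codecs module:

```python
import codecs

# The UTF-8 BOM is the encoding of U+FEFF: three bytes EF BB BF.
assert codecs.BOM_UTF8 == b"\xef\xbb\xbf"

# Software reading those bytes as Windows-1252 shows three garbage
# characters at the start of the document.
assert codecs.BOM_UTF8.decode("cp1252") == "ï»¿"

# The "utf-8-sig" codec strips a leading BOM if present.
assert (codecs.BOM_UTF8 + "hi".encode("utf-8")).decode("utf-8-sig") == "hi"
```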

By early 1992, the search was on for a good byte-stream encoding of multi-byte character sets. The draft ISO 10646 standard contained a non-required annex called UTF-1 that provided a byte-stream encoding of its 32-bit code points. This encoding was not satisfactory on performance grounds, among other problems, but the biggest problem was probably that it did not have a clear separation between ASCII and non-ASCII. The table below was derived from a textual description in the annex.

Dave Prosser of Unix System Laboratories submitted a proposal for one that had faster implementation characteristics and introduced the improvement that 7-bit ASCII characters would only represent themselves; all multi-byte sequences would include only bytes whose high bit was set. A modification by Ken Thompson of the Plan 9 operating system group at Bell Labs made it somewhat less bit-efficient than the previous proposal but crucially allowed it to be self-synchronizing, letting a reader start anywhere and immediately detect byte sequence boundaries.
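
Self-synchronization works because continuation bytes all match the bit pattern 10xxxxxx, so a reader dropped at an arbitrary offset can skip forward to the next lead byte; a sketch (the function name is illustrative):

```python
def next_boundary(data: bytes, i: int) -> int:
    """Advance i past any continuation bytes (10xxxxxx) to the next
    character boundary."""
    while i < len(data) and (data[i] & 0xC0) == 0x80:
        i += 1
    return i

data = "héllo".encode("utf-8")       # bytes: h, C3 A9, l, l, o
assert next_boundary(data, 2) == 3   # index 2 (0xA9) is a continuation byte
assert next_boundary(data, 1) == 1   # index 1 (0xC3) is already a lead byte
```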

It also abandoned the use of biases and instead added the rule that only the shortest possible encoding is allowed; the additional loss in compactness is relatively insignificant, but readers now have to look out for invalid encodings to avoid reliability and especially security issues.

Thompson's design was outlined on September 2, 1992, on a placemat in a New Jersey diner with Rob Pike. The several current definitions of UTF-8 in the various standards documents are all the same in their general mechanics, with the main differences being issues such as the allowed range of code point values and the safe handling of invalid input.
