Opcode/Instruction | Op/En | 64/32 Bit Mode Support | CPUID Feature Flag | Description |
---|---|---|---|---|
VEX.128.66.0F3A.W0 1D /r ib VCVTPS2PH xmm1/m64, xmm2, imm8 | A | V/V | F16C | Convert four packed single-precision floating-point values in xmm2 to packed half-precision (16-bit) floating-point values in xmm1/m64. Imm8 provides rounding controls. |
VEX.256.66.0F3A.W0 1D /r ib VCVTPS2PH xmm1/m128, ymm2, imm8 | A | V/V | F16C | Convert eight packed single-precision floating-point values in ymm2 to packed half-precision (16-bit) floating-point values in xmm1/m128. Imm8 provides rounding controls. |
EVEX.128.66.0F3A.W0 1D /r ib VCVTPS2PH xmm1/m64 {k1}{z}, xmm2, imm8 | B | V/V | AVX512VL AVX512F | Convert four packed single-precision floating-point values in xmm2 to packed half-precision (16-bit) floating-point values in xmm1/m64. Imm8 provides rounding controls. |
EVEX.256.66.0F3A.W0 1D /r ib VCVTPS2PH xmm1/m128 {k1}{z}, ymm2, imm8 | B | V/V | AVX512VL AVX512F | Convert eight packed single-precision floating-point values in ymm2 to packed half-precision (16-bit) floating-point values in xmm1/m128. Imm8 provides rounding controls. |
EVEX.512.66.0F3A.W0 1D /r ib VCVTPS2PH ymm1/m256 {k1}{z}, zmm2{sae}, imm8 | B | V/V | AVX512F | Convert sixteen packed single-precision floating-point values in zmm2 to packed half-precision (16-bit) floating-point values in ymm1/m256. Imm8 provides rounding controls. |
Op/En | Tuple Type | Operand 1 | Operand 2 | Operand 3 | Operand 4 |
---|---|---|---|---|---|
A | N/A | ModRM:r/m (w) | ModRM:reg (r) | imm8 | N/A |
B | Half Mem | ModRM:r/m (w) | ModRM:reg (r) | imm8 | N/A |
Convert packed single-precision floating-point values in the source operand to half-precision (16-bit) floating-point values and store the results in the destination operand. The rounding mode is specified by the immediate field (imm8).
Underflow results (i.e., tiny results) are converted to denormals; MXCSR.FTZ is ignored. If a source element is denormal relative to the input format, with DM masked, and at least one of PM or UM unmasked, a SIMD exception is raised with DE, UE, and PE set.
The immediate byte defines several bit fields that control the rounding operation. The effect and encoding of the RC field are listed in Table 5-13.
Bits | Field Name/Value | Description | Comment |
---|---|---|---|
Imm[1:0] | RC=00B | Round to nearest even | If Imm[2] = 0 |
Imm[1:0] | RC=01B | Round down | If Imm[2] = 0 |
Imm[1:0] | RC=10B | Round up | If Imm[2] = 0 |
Imm[1:0] | RC=11B | Truncate | If Imm[2] = 0 |
Imm[2] | MS1=0 | Use Imm[1:0] for rounding | Ignore MXCSR.RC |
Imm[2] | MS1=1 | Use MXCSR.RC for rounding | |
Imm[7:3] | Ignored | Ignored by processor | |
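These encodings map directly onto the immediate passed to the F16C intrinsic _mm_cvtps_ph listed further below. A minimal sketch, assuming an F16C-capable CPU and a compiler flag such as -mf16c (the variable names are ours):

```c
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    /* The low element is 0.1f, which is inexact in half precision, so the
       rounding choice is visible in the result. */
    __m128 src = _mm_set_ps(65504.0f, -2.25f, 1.5f, 0.1f);

    __m128i rne   = _mm_cvtps_ph(src, 0x00); /* Imm[2]=0, RC=00B: nearest even */
    __m128i down  = _mm_cvtps_ph(src, 0x01); /* Imm[2]=0, RC=01B: round down   */
    __m128i up    = _mm_cvtps_ph(src, 0x02); /* Imm[2]=0, RC=10B: round up     */
    __m128i trunc = _mm_cvtps_ph(src, 0x03); /* Imm[2]=0, RC=11B: truncate     */
    __m128i mxcsr = _mm_cvtps_ph(src, 0x04); /* Imm[2]=1: use MXCSR.RC         */

    /* Half-precision bit patterns of the low element: 0x2e66 for nearest
       even, round down, and truncate; 0x2e67 for round up. */
    printf("%04x %04x %04x %04x %04x\n",
           (unsigned)_mm_extract_epi16(rne, 0),
           (unsigned)_mm_extract_epi16(down, 0),
           (unsigned)_mm_extract_epi16(up, 0),
           (unsigned)_mm_extract_epi16(trunc, 0),
           (unsigned)_mm_extract_epi16(mxcsr, 0));
    return 0;
}
```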
VEX.128 version: The source operand is an XMM register. The destination operand is an XMM register or a 64-bit memory location. If the destination operand is a register, the upper bits (MAXVL-1:64) of the corresponding destination register are zeroed.
VEX.256 version: The source operand is a YMM register. The destination operand is an XMM register or a 128-bit memory location. If the destination operand is a register, the upper bits (MAXVL-1:128) of the corresponding destination register are zeroed.
Note: VEX.vvvv and EVEX.vvvv are reserved (must be 1111b).
EVEX encoded versions: The source operand is a ZMM/YMM/XMM register. The destination operand is a YMM/XMM/XMM (low 64 bits) register or a 256/128/64-bit memory location, conditionally updated with writemask k1. Bits (MAXVL-1:256/128/64) of the corresponding destination register are zeroed.
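As a concrete illustration of the masked EVEX forms, a minimal sketch using the _mm512_maskz_cvt_roundps_ph prototype listed further below (assumes AVX512F and a compiler flag such as -mavx512f; the function name and mask value are ours):

```c
#include <immintrin.h>

/* Convert sixteen floats to halves, zeroing the odd result lanes.
   The {z} form zeroes masked-off 16-bit elements; the merging form
   (_mm512_mask_cvt_roundps_ph) would preserve them from its source
   operand instead. */
__m256i halves_of_even_lanes(__m512 src)
{
    __mmask16 k = 0x5555;  /* keep elements 0, 2, 4, ... */
    return _mm512_maskz_cvt_roundps_ph(
        k, src, _MM_FROUND_TO_NEAREST_INT | _MM_FROUND_NO_EXC);
}
```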
Operation:

```
vCvt_s2h(SRC1[31:0])
{
    IF Imm[2] = 0
        THEN ; using Imm[1:0] for rounding control, see Table 5-13
            RETURN Cvt_Single_Precision_To_Half_Precision_FP_Imm(SRC1[31:0]);
        ELSE ; using MXCSR.RC for rounding control
            RETURN Cvt_Single_Precision_To_Half_Precision_FP_Mxcsr(SRC1[31:0]);
    FI;
}
```

VCVTPS2PH (EVEX encoded versions) when dest is a register:

```
(KL, VL) = (4, 128), (8, 256), (16, 512)
FOR j := 0 TO KL-1
    i := j * 16
    k := j * 32
    IF k1[j] OR *no writemask*
        THEN DEST[i+15:i] := vCvt_s2h(SRC[k+31:k])
        ELSE
            IF *merging-masking* ; merging-masking
                THEN *DEST[i+15:i] remains unchanged*
                ELSE ; zeroing-masking
                    DEST[i+15:i] := 0
            FI
    FI;
ENDFOR
DEST[MAXVL-1:VL/2] := 0
```

VCVTPS2PH (EVEX encoded versions) when dest is memory:

```
(KL, VL) = (4, 128), (8, 256), (16, 512)
FOR j := 0 TO KL-1
    i := j * 16
    k := j * 32
    IF k1[j] OR *no writemask*
        THEN DEST[i+15:i] := vCvt_s2h(SRC[k+31:k])
        ELSE *DEST[i+15:i] remains unchanged* ; merging-masking
    FI;
ENDFOR
```

VCVTPS2PH (VEX.256 encoded version):

```
DEST[15:0] := vCvt_s2h(SRC1[31:0]);
DEST[31:16] := vCvt_s2h(SRC1[63:32]);
DEST[47:32] := vCvt_s2h(SRC1[95:64]);
DEST[63:48] := vCvt_s2h(SRC1[127:96]);
DEST[79:64] := vCvt_s2h(SRC1[159:128]);
DEST[95:80] := vCvt_s2h(SRC1[191:160]);
DEST[111:96] := vCvt_s2h(SRC1[223:192]);
DEST[127:112] := vCvt_s2h(SRC1[255:224]);
DEST[MAXVL-1:128] := 0
```

VCVTPS2PH (VEX.128 encoded version):

```
DEST[15:0] := vCvt_s2h(SRC1[31:0]);
DEST[31:16] := vCvt_s2h(SRC1[63:32]);
DEST[47:32] := vCvt_s2h(SRC1[95:64]);
DEST[63:48] := vCvt_s2h(SRC1[127:96]);
DEST[MAXVL-1:64] := 0
```
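The helpers Cvt_Single_Precision_To_Half_Precision_FP_Imm/_Mxcsr are left abstract above. The sketch below is an illustrative software model (our code, not the SDM's definition) of one element's conversion with RC = round to nearest even (imm8 = 0); note that tiny results become half-precision denormals rather than being flushed, matching the description earlier:

```c
#include <stdint.h>
#include <string.h>

/* Software model of vCvt_s2h for one element, round-to-nearest-even.
   Illustrative only; the function name is ours. */
static uint16_t cvt_f32_to_f16_rne(float f)
{
    uint32_t x;
    memcpy(&x, &f, sizeof x);                      /* bit pattern of f    */

    uint32_t sign = (x >> 16) & 0x8000u;
    int32_t  e    = (int32_t)((x >> 23) & 0xFFu);  /* biased f32 exponent */
    uint32_t m    = x & 0x7FFFFFu;                 /* f32 mantissa bits   */

    if (e == 0xFF)                                 /* Inf or NaN          */
        return (uint16_t)(sign | 0x7C00u |
                          (m ? (0x0200u | (m >> 13)) : 0));

    int32_t he = e - 112;                          /* rebias: 127 -> 15   */
    if (he >= 0x1F)                                /* overflow -> +/-Inf  */
        return (uint16_t)(sign | 0x7C00u);

    if (he <= 0) {                                 /* denormal or zero    */
        if (he < -10)                              /* underflows to zero  */
            return (uint16_t)sign;
        m |= 0x800000u;                            /* make the 1. explicit */
        uint32_t shift = (uint32_t)(14 - he);      /* 14..24              */
        uint32_t bias  = (1u << (shift - 1)) - 1 + ((m >> shift) & 1);
        return (uint16_t)(sign | ((m + bias) >> shift));
    }

    /* Normal range: round the 23-bit mantissa to 10 bits (ties to even);
       a carry out of the mantissa bumps the exponent, possibly to Inf.  */
    uint32_t rounded = (m + 0xFFFu + ((m >> 13) & 1)) >> 13;
    return (uint16_t)(sign | (((uint32_t)he << 10) + rounded));
}
```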
Flags Affected: None.
Intel C/C++ Compiler Intrinsic Equivalent:
VCVTPS2PH __m256i _mm512_cvtps_ph(__m512 a);
VCVTPS2PH __m256i _mm512_mask_cvtps_ph(__m256i s, __mmask16 k, __m512 a);
VCVTPS2PH __m256i _mm512_maskz_cvtps_ph(__mmask16 k, __m512 a);
VCVTPS2PH __m256i _mm512_cvt_roundps_ph(__m512 a, const int imm);
VCVTPS2PH __m256i _mm512_mask_cvt_roundps_ph(__m256i s, __mmask16 k, __m512 a, const int imm);
VCVTPS2PH __m256i _mm512_maskz_cvt_roundps_ph(__mmask16 k, __m512 a, const int imm);
VCVTPS2PH __m128i _mm256_mask_cvtps_ph(__m128i s, __mmask8 k, __m256 a);
VCVTPS2PH __m128i _mm256_maskz_cvtps_ph(__mmask8 k, __m256 a);
VCVTPS2PH __m128i _mm_mask_cvtps_ph(__m128i s, __mmask8 k, __m128 a);
VCVTPS2PH __m128i _mm_maskz_cvtps_ph(__mmask8 k, __m128 a);
VCVTPS2PH __m128i _mm_cvtps_ph(__m128 m1, const int imm);
VCVTPS2PH __m128i _mm256_cvtps_ph(__m256 m1, const int imm);
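A short usage sketch for the F16C forms above, assuming compilation with -mf16c on gcc or clang; _mm_cvtph_ps (VCVTPH2PS) performs the inverse conversion:

```c
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    float in[4] = { 0.5f, 1.0f / 3.0f, -1024.0f, 3.14159f };
    float out[4];

    __m128  s = _mm_loadu_ps(in);
    __m128i h = _mm_cvtps_ph(s, 0x00);  /* imm8 = 0: round to nearest even */
    __m128  r = _mm_cvtph_ps(h);        /* widen the halves back to floats */

    _mm_storeu_ps(out, r);
    for (int i = 0; i < 4; i++)
        printf("%.8f -> %.8f\n", in[i], out[i]); /* shows precision loss */
    return 0;
}
```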
SIMD Floating-Point Exceptions: Invalid, Underflow, Overflow, Precision, Denormal (if MXCSR.DAZ=0).
Other Exceptions:
VEX-encoded instructions, see Table 2-26, “Type 11 Class Exception Conditions” (do not report #AC);
EVEX-encoded instructions, see Table 2-60, “Type E11 Class Exception Conditions.”
Additionally:
#UD | If VEX.W=1. |
#UD | If VEX.vvvv != 1111B or EVEX.vvvv != 1111B. |