VHDL Code for Serial Data Transmitters

Brief Information About the JESD204B IP Core

Release Information:
• Version: 17.1
• Release Date: November 2017
• Ordering Code: IP-JESD204B
• Product ID: 0116
• Vendor ID: 6AF7

Protocol Features:
• Joint Electron Device Engineering Council (JEDEC) JESD204B.01, 2012 standard release specification
• Device subclasses:
• Subclass 0—Backwards compatible to JESD204A.
• Subclass 1—Uses the SYSREF signal to support deterministic latency.
• Subclass 2—Uses SYNC_N detection to support deterministic latency.

The JESD204B IP core is a high-speed point-to-point serial interface for digital-to-analog (DAC) or analog-to-digital (ADC) converters to transfer data to FPGA devices. This unidirectional serial interface runs at a maximum data rate of 16.0 Gbps. The protocol offers higher bandwidth and a low I/O count, and supports scalability in both the number of lanes and data rates.
The JESD204B IP core addresses multi-device synchronization by introducing Subclass 1 and Subclass 2 to achieve deterministic latency. The JESD204B IP core incorporates: • Media access control (MAC)—a data link layer (DLL) block that controls the link states and character replacement. • Physical layer (PHY)—a physical coding sublayer (PCS) and physical media attachment (PMA) block. The JESD204B IP core does not incorporate the transport layer (TL), which controls frame assembly and disassembly. The TL and test components are provided as part of a design example, which you can customize for different converter devices.

Key features of the JESD204B IP core: • Data rate of up to 16.0 Gbps (uncharacterized support) • Run-time JESD204B parameter configuration (L, M, F, S, N, K, CS, CF) • MAC and PHY partitioning for portability • Subclass 0 mode for backward compatibility to JESD204A • Subclass 1 mode for deterministic latency support (using SYSREF) between the ADC/DAC and logic device • Subclass 2 mode for deterministic latency support (using SYNC_N) between the ADC/DAC and logic device • Multi-device synchronization. The JESD204B IP core supports TX-only, RX-only, and Duplex (TX and RX) mode. The IP core is a unidirectional protocol where interfacing to ADC utilizes the transceiver RX path and interfacing to DAC utilizes the transceiver TX path. The JESD204B IP core generates a single link with a single lane and up to a maximum of 8 lanes. If there are two ADC links that need to be synchronized, you have to generate two JESD204B IP cores and then manage the deterministic latency and synchronization signals, like SYSREF and SYNC_N, at your custom wrapper level. The JESD204B IP core supports duplex mode only if the LMF configuration for ADC (RX) is the same as DAC (TX) and with the same data rate.
This use case is mainly for prototyping with internal serial loopback mode, because in typical unidirectional operation the LMF configurations of the DAC and ADC converter devices are not identical. IP Core Variation. The JESD204B IP core has three core variations: • JESD204B MAC only • JESD204B PHY only • JESD204B MAC and PHY In a subsystem with multiple ADC and DAC converters, you need to use the Intel ® Quartus ® Prime software to merge the transceivers and group them into the transceiver architecture. For example, to create two instances of the JESD204B TX IP core with four lanes each and four instances of the JESD204B RX IP core with two lanes each, you can apply one of the following options: • MAC and PHY option • Generate JESD204B TX IP core with four lanes and JESD204B RX IP core with two lanes.
• Instantiate the desired components. • Use the Intel ® Quartus ® Prime software to merge the PHY lanes. • MAC only and PHY only option—based on the configuration above, there are a total of eight lanes in duplex mode. • Generate the JESD204B Duplex PHY with a total of eight lanes. (TX skew is reduced in this configuration as the channels are bonded). • Generate the JESD204B TX MAC with four lanes and instantiate it two times.
• Generate the JESD204B RX MAC with two lanes and instantiate it four times. • Create a wrapper to connect the JESD204B TX MAC and RX MAC with the JESD204B Duplex PHY. Note: For Intel ® Stratix ® 10 devices, run-time access for certain registers have been disabled. Refer to the TX and RX register map for more information.
The most critical parameters that must be set correctly during IP generation are the L and F parameters. Parameter L denotes the maximum lanes supported while parameter F denotes the size of the deskew buffer needed for deterministic latency.
The hardware is generated during parameterization, which means that run-time programmability can only fall back within the parameterized and generated hardware, not extend beyond it. You can use run-time configuration for prototyping or evaluating the performance of converter devices with various LMF configurations. However, in actual production, Intel ® recommends that you generate the JESD204B IP core with the intended LMF to get an optimized gate count. For example, if a converter device supports LMF = 442 and LMF = 222, to check the performance for both configurations, you need to generate the JESD204B IP core with the maximum F and L, which is L = 4 and F = 2. During operation, you can use the fall-back configuration to disable the lanes that are not used in LMF = 222 mode. You must ensure that other JESD204B configurations like M, N, S, CS, CF, and HD do not violate the parameter F setting. You can access the Configuration and Status Register (CSR) space to modify other configurations such as: • K (multiframe) • device and lane IDs • enable or disable scrambler • enable or disable character replacement.
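The fall-back rule above can be sketched as a simple check: a run-time configuration is legal only if it does not exceed the hardware that was generated. This is an illustrative sketch, not Intel tooling; the function and its name are invented for this example.

```python
# Hypothetical helper (not part of any Intel API): checks whether a run-time
# LMF configuration can "fall back" from hardware generated with given L and F.
def runtime_config_ok(gen_l, gen_f, run_l, run_f):
    """Run-time configuration may only disable resources, never exceed them:
    run-time L and F must not exceed the generated (parameterized) L and F."""
    return run_l <= gen_l and run_f <= gen_f

# Hardware generated for the larger of LMF = 442 and LMF = 222: L = 4, F = 2.
assert runtime_config_ok(4, 2, 4, 2)      # LMF = 442 (L=4, F=2): supported
assert runtime_config_ok(4, 2, 2, 2)      # LMF = 222 (L=2, F=2): fall back, unused lanes disabled
assert not runtime_config_ok(4, 2, 8, 2)  # L = 8 exceeds the generated hardware
```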
F Parameter This parameter indicates the number of octets per frame per lane at which the JESD204B link operates. You must set the F parameter according to the JESD204B specification for correct data mapping. To support the High Density (HD) data format, the JESD204B IP core tracks the start of frame and end of frame because F can be either an odd or even number.
The start of frame and start of multiframe wrap around the 32-bit data width architecture. The RX IP core outputs the start-of-frame ( sof[3:0]) and start-of-multiframe ( somf[3:0]) markers on the Avalon-ST data stream. Based on these markers, the transport layer builds the frames.
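For reference, the relationships behind F and K can be sketched as below. This is a hedged illustration: the formula F = (M × S × N′)/(8 × L) and the multiframe bounds (17 to 1024 octets per multiframe, K capped at 32) follow the JESD204B specification, but you should confirm the exact limits against the IP core user guide and your converter's datasheet.

```python
import math

def octets_per_frame(m, s, n_prime, l):
    """F = (M * S * N') / (8 * L): octets per frame per lane."""
    bits_per_frame_per_lane = m * s * n_prime / l
    assert bits_per_frame_per_lane % 8 == 0, "parameters do not map to whole octets"
    return int(bits_per_frame_per_lane // 8)

def valid_k_range(f):
    """K (frames per multiframe) must keep the multiframe length F*K
    within 17..1024 octets, with K itself capped at 32."""
    k_min = math.ceil(17 / f)
    k_max = min(32, 1024 // f)
    return k_min, k_max

# LMF = 222 with 16-bit samples (N' = 16), one sample per converter per frame:
assert octets_per_frame(m=2, s=1, n_prime=16, l=2) == 2   # F = 2
assert valid_k_range(2) == (9, 32)
```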
In a simpler system where the HD data format is set to 0, F is always 1, 2, 4, 6, 8, and so forth. This simplifies the transport layer design, so you do not need the sof[3:0] and somf[3:0] markers. The channel bonding mode that you select may contribute to the transmitter channel-to-channel skew. A bonded transmitter datapath clocking scheme provides lower channel-to-channel skew than non-bonded channel configurations. For Intel ® Arria ® 10 and Intel ® Stratix ® 10 devices, refer to the PMA Bonding chapter of the Intel ® Arria ® 10 Transceiver PHY User Guide or Intel ® Stratix ® 10 Transceiver PHY User Guide, respectively, for how to connect the ATX PLL and fPLL in bonded and non-bonded configurations.
For the non-bonded configuration, refer to Multi-Channel x1/xN Non-Bonded Example. For bonded configuration, refer to Implementing x6/xN Bonding Mode. • In PHY-only mode, you can generate up to 32 channels, provided that the channels are on the same side.
In MAC and PHY integrated mode, you can generate up to 8 channels. • In bonded channel configuration, the lower transceiver clock skew for all channels results in a lower channel-to-channel skew.
• For Stratix V, Arria V, and Cyclone V devices, you must use contiguous channels when you select bonded mode. The JESD204B IP core automatically selects between ×6, ×N or feedback compensation (fb_compensation) bonding depending on the number of transceiver channels you set. • For Intel ® Arria ® 10 and Intel ® Stratix ® 10 devices, you do not have to place the channels in bonded group contiguously. Refer to for the clock network selection. Refer to Channel Bonding section of the Intel ® Arria ® 10 Transceiver PHY User Guide or Intel ® Stratix ® 10 Transceiver PHY User Guide for more information about PMA Bonding.
• In non-bonded channel configuration, the transceiver clock skew is higher and latency is unequal in the transmitter phase compensation FIFO for each channel. This may result in a higher channel-to-channel skew. Note: The resource utilization data are extracted from a full design which includes the Intel ® FPGA Transceiver PHY Reset Controller IP Core. Thus, the actual resource utilization for the JESD204B IP core should be smaller by about 15 ALMs and 20 registers. Intel and strategic IP partners offer a broad portfolio of configurable IP cores optimized for Intel FPGA devices. The Intel ® Quartus ® Prime software installation includes the Intel ® FPGA IP library. Integrate optimized and verified Intel ® FPGA IP cores into your design to shorten design cycles and maximize performance.
The Intel ® Quartus ® Prime software also supports integration of IP cores from other sources. Use the IP Catalog ( Tools >IP Catalog) to efficiently parameterize and generate synthesis and simulation files for your custom IP variation. The Intel ® FPGA IP library includes the following types of IP cores: • Basic functions • DSP functions • Interface protocols • Low power functions • Memory interfaces and controllers • Processors and peripherals This document provides basic information about parameterizing, generating, upgrading, and simulating stand-alone IP cores in the Intel ® Quartus ® Prime software.
The Intel ® Quartus ® Prime software installation includes the Intel ® FPGA IP library. This library provides many useful IP cores for your production use without the need for an additional license. Some Intel ® FPGA IP cores require purchase of a separate license for production use.
The Intel ® FPGA IP Evaluation Mode allows you to evaluate these licensed Intel ® FPGA IP cores in simulation and hardware before deciding to purchase a full production IP core license. You only need to purchase a full production license for licensed Intel ® IP cores after you complete hardware testing and are ready to use the IP in production. With Intel ® FPGA IP Evaluation Mode, you can: • Simulate the behavior of a licensed Intel ® FPGA IP core in your system. • Verify the functionality, size, and speed of the IP core quickly and easily. • Generate time-limited device programming files for designs that include IP cores. • Program a device with your IP core and verify your design in hardware.
Intel ® FPGA IP Evaluation Mode supports the following operation modes: • Tethered—Allows running the design containing the licensed Intel ® FPGA IP indefinitely with a connection between your board and the host computer. Tethered mode requires a serial joint test action group (JTAG) cable connected between the JTAG port on your board and the host computer, which is running the Intel ® Quartus ® Prime Programmer for the duration of the hardware evaluation period. The Programmer only requires a minimum installation of the Intel ® Quartus ® Prime software, and requires no Intel ® Quartus ® Prime license. The host computer controls the evaluation time by sending a periodic signal to the device via the JTAG port. If all licensed IP cores in the design support tethered mode, the evaluation time runs until any IP core evaluation expires. If all of the IP cores support unlimited evaluation time, the device does not time-out.
• Untethered—Allows running the design containing the licensed IP for a limited time. The IP core reverts to untethered mode if the device disconnects from the host computer running the Intel ® Quartus ® Prime software. The IP core also reverts to untethered mode if any other licensed IP core in the design does not support tethered mode. When the evaluation time expires for any licensed Intel ® FPGA IP in the design, the design stops functioning. All IP cores that use the Intel ® FPGA IP Evaluation Mode time out simultaneously when any IP core in the design times out.
When the evaluation time expires, you must reprogram the FPGA device before continuing hardware verification. To extend use of the IP core for production, purchase a full production license for the IP core. You must purchase the license and generate a full production license key before you can generate an unrestricted device programming file. During Intel ® FPGA IP Evaluation Mode, the Compiler only generates a time-limited device programming file ( _time_limited.sof) that expires at the time limit. Note: Refer to each IP core's user guide for parameterization steps and implementation details.
Intel ® licenses IP cores on a per-seat, perpetual basis. The license fee includes first-year maintenance and support. You must renew the maintenance contract to receive updates, bug fixes, and technical support beyond the first year. You must purchase a full production license for Intel ® FPGA IP cores that require a production license, before generating programming files that you may use for an unlimited time.
To obtain your production license keys, visit the Intel ® FPGA Self-Service Licensing Center or contact your local Intel ® sales representative. The Intel ® FPGA software license agreements govern the installation and use of licensed IP cores, the Intel ® Quartus ® Prime design software, and all unlicensed IP cores.
Note: Upgrading IP cores may append a unique identifier to the original IP core entity names, without similarly modifying the IP instance name. There is no requirement to update these entity references in any supporting Intel ® Quartus ® Prime file, such as the Intel ® Quartus ® Prime Settings File (.qsf), Synopsys ® Design Constraints File (.sdc), or Signal Tap File (.stp), if these files contain instance names. The Intel ® Quartus ® Prime software reads only the instance name and ignores the entity name in paths that specify both names.
Use only instance names in assignments. IP Upgrade Optional—Indicates that upgrade is optional for this IP variation in the current version of the Intel ® Quartus ® Prime software. You can upgrade this IP variation to take advantage of the latest development of this IP core. Alternatively, you can retain previous IP core characteristics by declining to upgrade. Refer to the Description for details about IP core version differences. If you do not upgrade the IP, the IP variation synthesis and simulation files are unchanged and you cannot modify parameters until upgrading. IP Upgrade Required—Indicates that the current version of the Intel ® Quartus ® Prime software does not support compilation of your IP variation. This can occur if another edition of the Intel ® Quartus ® Prime software, such as the Intel ® Quartus ® Prime Standard Edition, generated this IP. Replace this IP component with a compatible component in the current edition. Follow these steps to upgrade IP cores: • In the latest version of the Intel ® Quartus ® Prime software, open the Intel ® Quartus ® Prime project containing an outdated IP core variation. The Upgrade IP Components dialog box automatically displays the status of IP cores in your project, along with instructions for upgrading each core. To access this dialog box manually, click Project > Upgrade IP Components.
• To upgrade one or more IP cores that support automatic upgrade, ensure that you turn on the Auto Upgrade option for the IP cores, and click Perform Automatic Upgrade. The Status and Version columns update when the upgrade is complete. Example designs provided with an IP core regenerate automatically whenever you upgrade the IP core. • To manually upgrade an individual IP core, select the IP core and click Upgrade in Editor (or simply double-click the IP core name). The parameter editor opens, allowing you to adjust parameters and regenerate the latest version of the IP core. • Filter the IP Catalog to Show IP for active device family or Show IP for all device families. If you have no project open, select the Device Family in the IP Catalog.
• Type in the Search field to locate any full or partial IP core name in IP Catalog. • Right-click an IP core name in IP Catalog to display details about supported devices, to open the IP core's installation folder, and for links to IP documentation.
• Click Search for Partner IP to access partner IP information on the web. The parameter editor prompts you to specify an IP variation name, optional ports, and output file generation options. The parameter editor generates a top-level Intel ® Quartus ® Prime IP file (.ip) for an IP variation in Intel ® Quartus ® Prime Pro Edition projects.
The parameter editor generates a top-level Quartus IP file (.qip) for an IP variation in Intel ® Quartus ® Prime Standard Edition projects. These files represent the IP variation in the project, and store parameterization information.
You can create a new Intel ® Quartus ® Prime project with the New Project Wizard. This process allows you to: • specify the working directory for the project. • assign the project name. • designate the name of the top-level design entity.
• Launch the Intel ® Quartus ® Prime software. • On the File menu, click New Project Wizard. • In the New Project Wizard: Directory, Name, Top-Level Entity page, specify the working directory, project name, and top-level design entity name.
• In the New Project Wizard: Add Files page, select the existing design files (if any) you want to include in the project. • In the New Project Wizard: Family & Device Settings page, select the device family and specific device you want to target for compilation. • In the EDA Tool Settings page, select the EDA tools you want to use with the Intel ® Quartus ® Prime software to develop your project.
• Review the summary of your chosen settings in the New Project Wizard window, then click Finish to complete the Intel ® Quartus ® Prime project creation. Refer to for the IP core parameter values and description. • In the IP Catalog ( Tools >IP Catalog), locate and double-click the JESD204B IP core. • Specify a top-level name for your custom IP variation. This name identifies the IP core variation files in your project.
If prompted, also specify the target Intel ® FPGA device family and output file HDL preference. The testbench and scripts are located in the /ip_sim folder. The Generate Example Design option generates supporting files for the following entities: • IP core for simulation—refer to • IP core design example for simulation—refer to Generating and Simulating the Design Example section in the Design Examples for JESD204B IP Core User Guide. • IP core design example for synthesis—refer to Compiling the JESD204B IP Core Design Example section in the Design Examples for JESD204B IP Core User Guide. • Click Finish or Generate HDL to generate synthesis and other optional files matching your IP variation specifications.
The parameter editor generates the top-level .qip or .qsys IP variation file and HDL files for synthesis and simulation. You can integrate the JESD204B IP core with other Platform Designer components within Platform Designer.
You can connect standard interfaces like clock, reset, Avalon-MM, Avalon-ST, HSSI bonded clock, HSSI serial clock, and interrupt interfaces within Platform Designer. However, for conduit interfaces, you are advised to export all those interfaces and handle them outside of Platform Designer. This is because conduit interfaces are not part of the standard interfaces. Thus, there is no guarantee on compatibility between different conduit interfaces. When you generate the JESD204B IP core variation, the Intel ® Quartus ® Prime software generates a Synopsys Design Constraints File (.sdc) that specifies the timing constraints for the input clocks to your IP core.
When you generate the JESD204B IP core, your design is not yet complete and the JESD204B IP core is not yet connected in the design. The final clock names and paths are not yet known, so the Intel ® Quartus ® Prime software cannot incorporate the final signal names in the .sdc file that it automatically generates. Instead, you must manually modify the clock signal names in this file to integrate these constraints with the timing constraints for your full design. This section describes how to integrate the timing constraints that the Intel ® Quartus ® Prime software generates with your IP core into the timing constraints for your design. The Intel ® Quartus ® Prime software automatically generates the altera_jesd204.sdc file that contains the JESD204B IP core's timing constraints. Three clocks are created at the input clock ports:
• JESD204B TX IP core: txlink_clk; reconfig_to_xcvr[0] (for Arria V, Cyclone V, and Stratix V devices only) or reconfig_clk (for Intel ® Arria ® 10 and Intel ® Stratix ® 10 devices only); and tx_avs_clk.
• JESD204B RX IP core: rxlink_clk; reconfig_to_xcvr[0] (for Arria V, Cyclone V, and Stratix V devices only) or reconfig_clk (for Intel ® Arria ® 10 and Intel ® Stratix ® 10 devices only); and rx_avs_clk.
In a functional system design, these clocks (except for the reconfig_to_xcvr[0] clock) are typically provided by the core PLL.
In the.sdc file for your project, make the following command changes: • Specify the PLL clock reference pin frequency using the create_clock command. • Derive the PLL generated output clocks from the Intel ® FPGA PLL IP Core (for Arria V, Cyclone V and Stratix V) or Intel ® FPGA I/O PLL IP Core (for Intel ® Arria ® 10) using the derive_pll_clocks command.
• For Intel ® Stratix ® 10, the Intel ® FPGA I/O PLL IP core has an SDC file that derives the PLL clocks based on your PLL configuration. • Comment out the create_clock commands for the txlink_clk, reconfig_to_xcvr[0] or reconfig_clk, tx_avs_clk, rxlink_clk, and rx_avs_clk clocks in the altera_jesd204.sdc file. • Identify the base and generated clock names that correlate to the txlink_clk, reconfig_clk, tx_avs_clk, rxlink_clk, and rx_avs_clk clocks using the report_clock command. • Describe the relationship between base and generated clocks in the design using the set_clock_groups command. After you complete your design, you must modify the clock names in your .sdc file to the full-design clock names, taking into account both the IP core instance name in the full design, and the design hierarchy.
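As a sketch, the resulting constraints in your project-level .sdc file might look like the following. All pin, PLL, and clock names here are placeholders invented for this example; use the report_clock command to find the actual generated clock names in your design before applying anything like this.

```tcl
# Sketch only -- replace every name below with the names report_clock shows
# for your design; device_clk and core_pll are placeholder identifiers.

# 1. Constrain the PLL reference clock pin (assuming a 100 MHz reference).
create_clock -name refclk_core -period 10.000 [get_ports device_clk]

# 2. Derive the core PLL output clocks (Arria V / Cyclone V / Stratix V flow).
derive_pll_clocks

# 3. In altera_jesd204.sdc, comment out the create_clock commands for
#    txlink_clk, rxlink_clk, tx_avs_clk, rx_avs_clk, and reconfig_clk.

# 4. Declare the asynchronous relationship between the link and Avalon-MM
#    clock domains, using the generated clock names from report_clock.
set_clock_groups -asynchronous \
    -group [get_clocks {core_pll|outclk0}] \
    -group [get_clocks {core_pll|outclk1}]
```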
Be careful when adding the timing exceptions based on your design, for example, when the JESD204B IP core handles asynchronous timing between the txlink_clk, rxlink_clk, pll_ref_clk, tx_avs_clk, rx_avs_clk, and reconfig_clk (for Intel ® Arria ® 10 and Intel ® Stratix ® 10 only) clocks. The table below shows an example of clock names in the altera_jesd204.sdc and input clock names in the user design. In this example, there is a dedicated input clock for the transceiver TX PLL and CDR at the refclk pin. The device_clk is the input to the core PLL clkin pin.
The IP core and transceiver Avalon-MM interfaces have separate external clock sources with different frequencies. JESD204B IP Core Parameters (Main Tab): • Device Family ( Arria V, Arria V GZ, Intel ® Arria ® 10, Cyclone V, Stratix V, Intel ® Stratix ® 10 )—Select the targeted device family. • JESD204B Wrapper ( Base Only, PHY Only, Both Base and PHY )—Select the JESD204B wrapper.
• Base Only—generates the DLL only. • PHY Only—generates the transceiver PHY layer only (soft and hard PCS). • Both Base and PHY—generates both DLL and transceiver PHY layers. Data Path • Receiver • Transmitter • Duplex Select the operation modes.
This selection enables or disables the receiver and transmitter supporting logic. • RX—instantiates the receiver to interface to the ADC. • TX—instantiates the transmitter to interface to the DAC.
• Duplex—instantiates the receiver and transmitter to interface to both the ADC and DAC. JESD204B Subclass • 0 • 1 • 2 Select the JESD204B subclass modes. • 0—Set subclass 0 • 1—Set subclass 1 • 2—Set subclass 2 Data Rate 1.0–16.0 Set the data rate for each lane. • Cyclone V—1.0 Gbps to 5.0 Gbps • Arria V—1.0 Gbps to 7.5 Gbps • Arria V GZ—2.0 Gbps to 9.9 Gbps • Intel ® Arria ® 10—2.0 Gbps to 15.0 Gbps • Stratix V—2.0 Gbps to 12.5 Gbps • Intel ® Stratix ® 10—2.0 Gbps to 16.0 Gbps PCS Option • Enabled Hard PCS • Enabled Soft PCS • Enabled PMA Direct Select the PCS modes. • Enabled Hard PCS—utilize Hard PCS components. Select this option to minimize resource utilization with data rate that supports up to the limitation of the Hard PCS.
Note: For this setting, you utilize PMA Direct mode with an 80-bit PMA width. PLL Type • CMU • ATX Select the phase-locked loop (PLL) type, depending on the FPGA device family. This parameter is not applicable to Intel ® Arria ® 10 and Intel ® Stratix ® 10 devices. • Cyclone V—CMU • Arria V—CMU • Stratix V—CMU, ATX Bonding Mode • Bonded • Non-bonded Select the bonding mode. • Bonded—select this option to minimize inter-lane skew for the transmitter datapath.
• Non-bonded—select this option to disable inter-lanes skew control for the transmitter datapath. Note: For Stratix V, Arria V, Arria V GZ and Cyclone V devices, the bonding type is automatically selected based on the device family and number of lanes that you set. PLL/CDR Reference Clock Frequency Variable Set the transceiver reference clock frequency for PLL or CDR. • For Stratix V, Arria V, Arria V GZ and Cyclone V devices, the frequency range available for you to choose depends on the PLL type and data rate that you select. • For Intel ® Stratix ® 10 and Intel ® Arria ® 10 devices, the frequency range available for you to choose depends on the data rate.
Enable Bit reversal and Byte reversal On, Off Turn on this option to set the data transmission order in MSB-first serialization. If this option is off, the data transmission order is in LSB-first serialization. Enable Transceiver Dynamic Reconfiguration On, Off Turn on this option to enable dynamic data rate change. When you enable this option, you need to connect the reconfiguration interface to the transceiver reconfiguration controller. For Intel ® Arria ® 10 and Intel ® Stratix ® 10 devices, turn on this option to enable the Transceiver Native PHY reconfiguration interface.
Enable Altera Debug Master Endpoint On, Off Turn on this option for the Transceiver Native PHY IP core to include an embedded Altera Debug Master Endpoint (ADME). This ADME connects internally to the Avalon-MM slave interface of the Transceiver Native PHY and can access the reconfiguration space of the transceiver. It can perform certain test and debug functions via JTAG using System Console.
This parameter is valid only for Intel ® Arria ® 10 and Intel ® Stratix ® 10 devices and when you turn on the Enable Transceiver Dynamic Reconfiguration parameter. Share Reconfiguration Interface On, Off When enabled, Transceiver Native PHY presents a single Avalon-MM slave interface for dynamic reconfiguration of all channels.
In this configuration, the upper address bits ( Intel ® Stratix ® 10: [log2(L)+10:11]; Intel ® Arria ® 10: [log2(L)+9:10]) of the reconfiguration address bus specify the selected channel, and the lower address bits ( Intel ® Stratix ® 10: [10:0]; Intel ® Arria ® 10: [9:0]) provide the register offset address within the reconfiguration space of the selected channel, where L is the number of channels. When disabled, the Native PHY IP core provides an independent reconfiguration interface for each channel. For example, when the reconfiguration interface is not shared for a four-channel Native PHY IP instance, reconfig_address[9:0] corresponds to the reconfiguration address bus of logical channel 0, reconfig_address[19:10] corresponds to logical channel 1, reconfig_address[29:20] corresponds to logical channel 2, and reconfig_address[39:30] corresponds to logical channel 3. For configurations using more than one channel, this option must be enabled when the Altera Debug Master Endpoint is enabled.
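The channel/offset split of the shared reconfiguration address can be illustrated with a short sketch. The numbers assume the Intel Arria 10 layout described above (10 offset bits per channel; Stratix 10 would use 11), and the helper function is not part of any Intel API.

```python
# Illustrative decode of a shared reconfiguration address (Intel Arria 10
# layout assumed: reconfig_address[9:0] is the per-channel register offset,
# the bits above select the channel). Values are arbitrary examples.
OFFSET_BITS = 10

def decode_shared_address(addr):
    channel = addr >> OFFSET_BITS               # upper bits select the channel
    offset = addr & ((1 << OFFSET_BITS) - 1)    # lower bits address the register
    return channel, offset

# Channel 2, register offset 0x15, on a four-channel instance:
addr = (2 << OFFSET_BITS) | 0x15
assert decode_shared_address(addr) == (2, 0x15)
```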
Provide Separate Reconfiguration Interface for Each Channel On, Off When enabled, the transceiver dynamic reconfiguration interface presents separate clock, reset, and Avalon-MM slave interfaces for each channel instead of a single wide bus. This option is only available when Share Reconfiguration Interface is turned off. Enable Capability Registers On, Off Turn on this option to enable capability registers, which provide high-level information about the transceiver channel's configuration. Set user-defined IP identifier 0–255 Set a user-defined numeric identifier that can be read from the user identifier offset when you turn on the Enable Capability Registers parameter. Enable Control and Status Registers On, Off Turn on this option to enable soft registers for reading status signals and writing control signals on the PHY interface through the embedded debug. Signals include rx_is_locktoref, rx_is_locktodata, tx_cal_busy, rx_cal_busy, rx_serial_loopback, set_rx_locktodata, set_rx_locktoref, tx_analogreset, tx_digitalreset, rx_analogreset, and rx_digitalreset. For more information, refer to the Intel ® Arria ® 10 or Intel ® Stratix ® 10 Transceiver User Guide.
Enable PRBS Soft Accumulators On, Off Turn on this option to use soft logic to perform PRBS bit and error accumulation when using the hard PRBS generator and checker. JESD204B Configurations Tab Lanes per converter device (L) 1–8 Set the number of lanes per converter device. Converters per device (M) 1–256 Set the number of converters per converter device. Enable manual F configuration On, Off Turn on this option to set parameter F in manual mode and enable this parameter to be configurable. Otherwise, the parameter F is in derived mode. You have to enable this parameter and configure the appropriate F value if the transport layer in your design supports the Control Word (CF) or High Density (HD) format, or both.
Set the number of frames per multiframe. This value depends on the value of F and is derived using the following constraint: • The value of K must fall within the range ceil(17/F) ≤ K ≤ min(32, floor(1024/F)), so that the multiframe length F × K stays between 17 and 1024 octets. Generated Files (extension and description):
.vhd or .v: IP core variation file, which defines a VHDL or Verilog HDL description of the custom IP core. Instantiate the entity defined by this file inside of your design. Include this file when compiling your design in the Intel ® Quartus ® Prime software. .cmp: A VHDL component declaration file for the IP core variation.
Add the contents of this file to any VHDL architecture that instantiates the IP core. .sdc: Contains timing constraints for your IP core variation. .qip: Contains Intel ® Quartus ® Prime project information for your IP core variation. .tcl: Tcl script file to run in the Intel ® Quartus ® Prime software. .sip: Contains IP core library mapping information required by the Intel ® Quartus ® Prime software. The Intel ® Quartus ® Prime software generates a .sip file during generation of some Intel ® FPGA IP cores. You must add any generated .sip file to your project for use by NativeLink simulation and the Intel ® Quartus ® Prime Archiver. .spd: Contains a list of required simulation files for your IP core. JESD204B IP Core Testbench. To run the Tcl script using the Intel ® Quartus ® Prime software, follow these steps: • Launch the Intel ® Quartus ® Prime software.
• On the View menu, click Utility Windows >Tcl Console. • In the Tcl Console, type cd /ip_sim to go to the specified directory. • Type source gen_sim_verilog.tcl (Verilog) or source gen_sim_vhdl.tcl (VHDL) to generate the simulation files. To run the Tcl script using the command line, follow these steps: • Obtain the Intel ® Quartus ® Prime software resource.
• Type cd /ip_sim to go to the specified directory. • Type quartus_sh -t gen_sim_verilog.tcl (Verilog) or quartus_sh -t gen_sim_vhdl.tcl (VHDL) to generate the simulation files.
Simulating the IP Core Testbench.
Simulation setup scripts:
• ModelSim ® - Intel ® FPGA SE/AE: /ip_sim/testbench/setup_scripts/mentor (msim_setup.tcl)
• VCS: /ip_sim/testbench/setup_scripts/synopsys/vcs (vcs_setup.sh)
• VCS MX: /ip_sim/testbench/setup_scripts/synopsys/vcsmx (vcsmx_setup.sh, synopsys_sim.setup)
• Aldec Riviera: /ip_sim/testbench/setup_scripts/aldec (rivierapro_setup.tcl)
• Cadence: /ip_sim/testbench/setup_scripts/cadence (ncsim_setup.sh)
Simulation run scripts:
• ModelSim- Intel ® FPGA SE/AE: /ip_sim/testbench/mentor (run_altera_jesd204_tb.tcl)
• VCS: /ip_sim/testbench/synopsys/vcs (run_altera_jesd204_tb.sh)
• VCS MX: /ip_sim/testbench/synopsys/vcsmx (run_altera_jesd204_tb.sh)
• Aldec Riviera: /ip_sim/testbench/aldec (run_altera_jesd204_tb.tcl)
• Cadence: /ip_sim/testbench/cadence (run_altera_jesd204_tb.sh)
To simulate the testbench design using the ModelSim- Intel ® FPGA simulator, follow these steps: • Launch the ModelSim- Intel ® FPGA simulator. • On the File menu, click Change Directory > Select /ip_sim/testbench/. • On the File menu, click Load > Macro file.
Select run_altera_jesd204_tb.tcl. This file compiles the design and runs the simulation automatically, providing a pass/fail indication on completion. To simulate the testbench design using the Aldec Riviera-PRO simulator, follow these steps: • Launch the Aldec Riviera-PRO simulator. • On the File menu, click Change Directory >Select /ip_sim/testbench/. • On the Tool menu, click Execute Macro. Select run_altera_jesd204_tb.tcl. This file compiles the design and runs the simulation automatically, providing a pass/fail indication on completion.
To simulate the testbench design using the VCS, VCS MX (in Linux), or Cadence simulator, follow these steps: • Launch the VCS, VCS MX, or Cadence simulator. • On the File menu, click Change Directory > Select /ip_sim/testbench/. • Run the run_altera_jesd204_tb.sh file. This file compiles the design and runs the simulation automatically, providing a pass/fail indication on completion. The JESD204B testbench simulation flow: • At the start, the system is under reset (all the components are in reset). • After 100 ns, the Transceiver Reset Controller IP core powers up; the testbench then waits for the tx_ready and rx_ready signals from the Transceiver Reset Controller IP to assert.
• After 500 ns, the reset signal of the JESD204B TX Avalon-MM interface is released (goes HIGH). At the next positive edge of the link_clk signal, the JESD204B TX link powers up by releasing its reset signal. • The JESD204B TX link starts transmitting K28.5 characters.
• The reset signal of the JESD204B RX Avalon-MM interface is released (goes HIGH). At the next positive edge of the link_clk signal, the JESD204B RX link powers up by releasing its reset signal. • Once the link is out of reset, a SYSREF pulse is generated to reset the LMFC counter inside both the JESD204B TX and RX IP cores. • When the txlink_ready signal is asserted, the packet generator starts sending packets to the TX datapath. • The packet checker starts comparing the packets sent from the TX datapath and received at the RX datapath after the rxlink_valid signal is asserted. • The testbench reports a pass or fail when all the packets are received and compared. The JESD204B IP core implements a transmitter (TX) and receiver (RX) block.
Each block has two layers and consists of the following components: • Media access control (MAC)—DLL block that consists of the link layer (link state machine and character replacement), CSR, Subclass 1 and 2 deterministic latency, scrambler or descrambler, and multiframe counter. • Physical layer (PHY)—PCS and PMA block that consists of the 8B/10B encoder, word aligner, serializer, and deserializer. You can specify the datapath and wrapper for your design and generate them separately. The TX and RX blocks in the DLL utilize the Avalon-ST interface to transmit or receive data and the Avalon-MM interface to access the CSRs. The TX and RX blocks operate on a 32-bit data width per channel, where the frame assembly packs the data into four octets per channel. Multiple TX and RX blocks can share the clock and reset if the link rates are the same. 32-Bit Architecture. The JESD204B IP core consists of a 32-bit internal datapath per lane.
This means that the JESD204B IP core expects the data samples to be assembled into 32-bit data (4 octets) per lane in the transport layer before sending the data to the Avalon-ST data bus. The JESD204B IP core operates in the link clock domain. The link clock runs at data rate/40 because the core operates on a 32-bit data bus after 8B/10B encoding. Because the internal datapath of the core is 32 bits, the (F × K) value must be divisible by 4 to align the multiframe length on a 32-bit boundary. Apart from this, the deterministic latency counter values, such as the LMFC counter, RX Buffer Delay (RBD) counter, and Subclass 2 adjustment counter, are link clock counts instead of frame clock counts. Avalon-MM Interface. The Avalon-MM slave interface provides access to internal CSRs. The read and write data width is 32 bits (DWORD access).
The Avalon-MM slave is asynchronous to the txlink_clk, txframe_clk, rxlink_clk, and rxframe_clk clock domains. You are recommended to release the reset for the CSR configuration space first.
All run-time JESD204B configurations, such as L, F, M, N, N', CS, CF, and HD, should be set before releasing the reset for the link and frame clock domains. Each write transfer has a writeWaitTime of 0 cycles, while a read transfer has a readWaitTime of 1 cycle and a readLatency of 1 cycle. The CGS phase is achieved through the following process: • Upon reset, the converter device (RX) issues a synchronization request by driving SYNC_N low. The JESD204B TX IP core transmits a stream of /K/ = /K28.5/ symbols.
The receiver synchronizes when it receives four consecutive /K/ symbols. • For Subclass 0, the RX converter devices deassert the SYNC_N signal at the frame boundary. After all receivers have deactivated their synchronization requests, the JESD204B TX IP core continues to emit /K/ symbols until the start of the next frame. The core then proceeds to transmit the ILAS data sequence, or the encoded user data if the csr_lane_sync_en signal is disabled. • For Subclass 1 and 2, the RX converter devices deassert the SYNC_N signal at the LMFC boundary. After all receivers deactivate the SYNC_N signal, the JESD204B TX IP core continues to transmit /K/ symbols until the next LMFC boundary.
At the next LMFC boundary, the JESD204B IP core transmits ILAS data sequence. (There is no programmability to use a later LMFC boundary.) TX ILAS. When lane alignment sequence is enabled through the csr_lane_sync_en register, the ILAS sequence is transmitted after the CGS phase. The ILAS phase takes up four multi-frames.
For Subclass 0 mode, you can program the CSR ( csr_ilas_multiframe) to extend the ILAS phase to a maximum of 256 multi-frames before transitioning to the encoded user data phase. The ILAS data is not scrambled regardless of whether scrambling is enabled or disabled. The multi-frame has the following structure: • Each multi-frame starts with a /R/ character (K28.0) and ends with a /A/ character (K28.3) • The second multi-frame transmits the ILAS configuration data. The multi-frame starts with /R/ character (K28.0), followed by /Q/ character (K28.4), and then followed by the link configuration data, which consists of 14 octets as illustrated in the table below.
It is then padded with dummy data and ends with the /A/ character (K28.3), marking the end of the multi-frame. • Dummy octets form an 8-bit counter and are always reset when not in the ILAS phase. • For a configuration of more than four multi-frames, the multi-frames follow the same rules above and are padded with dummy data between the /R/ character and the /A/ character.
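As a sketch, the second ILAS multiframe described above can be assembled like this. The control octet values are the standard 8B/10B codes (K28.0 = 0x1C, K28.4 = 0x9C, K28.3 = 0x7C); the 14 link configuration octets and the helper name are placeholders:

```python
R, Q, A = 0x1C, 0x9C, 0x7C  # K28.0, K28.4, K28.3 control octets

def ilas_second_multiframe(F, K, config_octets):
    """Sketch of the 2nd ILAS multiframe: /R/, /Q/, 14 link configuration
    octets, dummy padding (modeled as an 8-bit counter), then a closing /A/."""
    assert len(config_octets) == 14
    size = F * K                      # octets per multiframe
    body = [R, Q] + list(config_octets)
    pad = [d & 0xFF for d in range(size - len(body) - 1)]
    return body + pad + [A]

mf = ilas_second_multiframe(2, 16, [0] * 14)
print(len(mf), hex(mf[0]), hex(mf[1]), hex(mf[-1]))  # 32 0x1c 0x9c 0x7c
```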
Character replacement for non-scrambled data. The character replacement for non-scrambled mode in the IP core follows these JESD204B specification rules: • At the end of a frame (not coinciding with the end of a multi-frame), if the last octet equals the last octet of the previous frame, the transmitter replaces the octet with the /F/ character (K28.7). However, the original octet is encoded if an alignment character was transmitted in the previous frame. • At the end of a multi-frame, if the last octet equals the last octet of the previous frame, the transmitter replaces the octet with the /A/ character (K28.3), even if a control character was already transmitted in the previous frame. For devices that do not support lane synchronization, only /F/ character replacement is done.
In that case, at every end of frame, regardless of whether it coincides with the end of a multi-frame, the transmitter encodes the octet as the /F/ character (K28.7) if it fits the rules above. Character replacement for scrambled data. The character replacement for scrambled data in the IP core follows these JESD204B specification rules: • At the end of a frame (not coinciding with the end of a multi-frame), if the octet equals 0xFC (D28.7), the transmitter encodes the octet as the /F/ character (K28.7). • At the end of a multi-frame, if the octet equals 0x7C (D28.3), the transmitter replaces the last octet with the /A/ character (K28.3).
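A minimal sketch of the scrambled-mode decision for lane-synchronized devices (the function is illustrative; it only flags whether the octet is sent as a control character):

```python
F_CHAR, A_CHAR = 0xFC, 0x7C  # /F/ (K28.7) and /A/ (K28.3) share their octet
                             # values with data characters D28.7 and D28.3

def replace_scrambled(octet, end_of_frame, end_of_multiframe):
    """Scrambled-mode rules: 0x7C at the end of a multiframe is sent as /A/;
    0xFC at the end of a frame that is NOT a multiframe end is sent as /F/.
    Returns (octet, is_control)."""
    if end_of_multiframe and octet == A_CHAR:
        return octet, True
    if end_of_frame and not end_of_multiframe and octet == F_CHAR:
        return octet, True
    return octet, False  # transmitted as an ordinary data character

print(replace_scrambled(0x7C, True, True))   # (124, True)
print(replace_scrambled(0x7C, True, False))  # (124, False)
```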
For devices that do not support lane synchronization, only /F/ character replacement is done. In that case, at every end of frame, regardless of whether it coincides with the end of a multi-frame, the transmitter encodes the octet as the /F/ character (K28.7) if it fits the rules above. TX PHY Layer.
The 8B/10B encoder encodes the data before transmission through the serial line. The 8B/10B encoding has sufficient bit transition density (3 to 8 transitions per 10-bit symbol) to allow clock recovery by the receiver. The control characters in this scheme allow the receiver to: • synchronize to the 10-bit boundary.
• insert special characters to mark the start and end of frames, and the start and end of multi-frames. • detect single-bit errors. The JESD204B IP core supports both MSB-first and LSB-first transmission order.
For MSB-first transmission, the left-most bit of the 8B/10B code group (bit 'a') is serialized and transmitted first. The receiver block includes the following modules: • RX CSR—manages the configuration and status registers.
• RX_CTL—manages the SYNC_N signal, state machine that controls the data link layer states, LMFC, and also the buffer release, which is crucial for deterministic latency throughout the link. • RX Scrambler and Data Link Layer—takes in 32-bits of data that decodes the ILAS, performs descrambling, character replacement as per the JESD204B specification, and error detection (code group error, frame and lane realignment error). The CGS phase is the link up phase that monitors the detection of /K28.5/ character.
The CGS phase is achieved through the following process: • Once the word boundary is aligned, the RX PHY layer detects the /K28.5/ character on the 20-bit boundary and indicates that the character is valid. • The receiver deasserts SYNC_N on the next frame boundary (for Subclass 0) or on the next LMFC boundary (for Subclass 1 and 2) after the reception of four successive /K/ characters. • After correct reception of another four 8B/10B characters, the receiver assumes full code group synchronization. The error detected in this state machine is the code group error.
A code group error always triggers link reinitialization through the assertion of the SYNC_N signal; this cannot be disabled through the CSR. The CS state machine states are CS_INIT, CS_CHECK, and CS_DATA. The minimum duration for a synchronization request on SYNC_N is five frames plus nine octets. Frame Synchronization. After the CGS phase, the receiver assumes that the first non-/K28.5/ character marks the start of the frame and multi-frame.
If the transmitter emits an initial lane alignment sequence, the first non-/K28.5/ character is /K28.0/. Similar to the JESD204 TX IP core, the csr_lane_sync_en is set to 1 by default, thus the RX core detects the /K/ character to /R/ character transition. If the csr_lane_sync_en is set to 0, the RX core detects the /K/ character to the first data transition.
An ILAS error and an unexpected /K/ character are flagged if either of these conditions is violated. When csr_lane_sync_en is set to 0, you have to disable data checking for the first 16 octets of data, because the character replacement block takes 16 octets to recover the end-of-frame pointer for character replacement. When csr_lane_sync_en is set to 1 (the default JESD204B setting), the number of octets to be discarded depends on the scrambler or descrambler block.
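The self-synchronizing property behind this discard requirement can be illustrated with a bit-serial model of the JESD204B scrambler (polynomial 1 + x^14 + x^15, as stated later in this guide). This is a behavioral sketch, not the core's 32-bit parallel implementation:

```python
def lfsr_stream(bits, state, descramble=False):
    """Self-synchronizing scrambler/descrambler, polynomial 1 + x^14 + x^15.
    'state' holds the 15 most recent scrambled (line) bits, newest first."""
    out = []
    for b in bits:
        fb = state[13] ^ state[14]        # taps at x^14 and x^15
        out.append(b ^ fb)
        # the shift register always tracks the scrambled line bits:
        # the output when scrambling, the input when descrambling
        state = [b if descramble else b ^ fb] + state[:14]
    return out

data = [1, 0, 1, 1] * 32                  # 128 bits = 16 octets
tx = lfsr_stream(data, [0] * 15)          # transmitter seed: all zeros
rx = lfsr_stream(tx, [1] * 15, descramble=True)  # different receiver seed
# once 15 line bits have flushed the mismatched seed, data is recovered
print(rx[15:] == data[15:])  # True
```

Because the receiver state is rebuilt purely from the received line bits, the descrambler locks regardless of its reset seed, which is why only the first few octets out of reset must be discarded.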
The receiver assumes that a new frame starts every F octets. The octet counter is used for frame alignment and lane alignment. Frame alignment is monitored through the alignment character /F/, which the transmitter inserts at the end of the frame. The /A/ character indicates the end of the multi-frame. The character replacement algorithm depends on whether scrambling is enabled or disabled, regardless of the csr_lane_sync_en register setting.
The alignment detection process: • If two successive valid alignment characters are detected in the same position other than the assumed end of frame—without receiving a valid or invalid alignment character at the expected position between the two alignment characters—the receiver realigns its frame to the new position of the received alignment characters. • If lane realignment results in a frame alignment error, the receiver issues an error. In the JESD204B RX IP core, the same flexible buffer is used for frame and lane alignment. Lane realignment gives a correct frame alignment because the lane alignment character doubles as a frame alignment character. A frame realignment, however, can cause an incorrect lane alignment or link latency; the course of action is for the RX to request reinitialization through SYNC_N. After the frame synchronization phase has entered FS_DATA, the lane alignment is monitored via the /A/ character (/K28.3/) at the end of the multi-frame.
The first /A/ detection in the ILAS phase is important for the RX core to determine the minimum RX buffer release for inter-lane alignment. There are two types of errors detected in the lane alignment phase: • The arrival of /A/ characters from multiple lanes spans more than one multi-frame. • Misalignment is detected during the user data phase. The realignment rules for lane alignment are similar to those for frame alignment: • If two successive and valid /A/ characters are detected at the same position other than the assumed end of multi-frame—without receiving a valid or invalid /A/ character at the expected position between the two /A/ characters—the receiver aligns the lane to the position of the newly received /A/ characters. • If a recent frame realignment causes the loss of lane alignment, the receiver realigns the lane frame—which is already at the position of the first received /A/ character—at the unexpected position.
The JESD204B RX IP core captures the 14 octets of link configuration data that are transmitted in the 2nd multi-frame of the ILAS phase. The receiver waits for the reception of the /Q/ character that marks the start of the link configuration data, and then latches the data into the ILAS octet registers on a per-lane basis. You can read the 14 octets captured in the link configuration data through the CSR.
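The CSR access pattern can be sketched as follows. The csr_ilas_data_sel and csr_ilas_octet register names come from this guide, but the byte offsets, the packing of the 14 octets into 32-bit words, and the helper itself are hypothetical placeholders for the real register map:

```python
def read_ilas_octets(csr_write, csr_read, lane):
    """Two-step access sketch: select the lane via csr_ilas_data_sel, then
    read the captured octets via csr_ilas_octet. csr_write/csr_read are
    user-supplied 32-bit Avalon-MM accessors; addresses are hypothetical."""
    CSR_ILAS_DATA_SEL = 0x94              # hypothetical byte offset
    CSR_ILAS_OCTET0 = 0x98                # hypothetical: 14 octets in 4 words
    csr_write(CSR_ILAS_DATA_SEL, lane)    # select which lane to read from
    words = [csr_read(CSR_ILAS_OCTET0 + 4 * i) for i in range(4)]
    # unpack little-endian octets and keep the 14 configuration octets
    return [(w >> (8 * b)) & 0xFF for w in words for b in range(4)][:14]

# usage with a fake register file standing in for the Avalon-MM bus
regs = {0x98: 0x030201A0, 0x9C: 0, 0xA0: 0, 0xA4: 0}
octets = read_ilas_octets(lambda a, v: regs.__setitem__(a, v),
                          lambda a: regs.get(a, 0), lane=0)
print(hex(octets[0]), len(octets))  # 0xa0 14
```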
You need to first set the csr_ilas_data_sel register to select which lane's link configuration data to read, then read from the csr_ilas_octet register. Initial Lane Synchronization. The receivers in Subclass 1 and Subclass 2 modes store data in a memory buffer (Subclass 0 mode does not store data in the buffer but releases it immediately on the frame boundary as soon as the latest lane arrives). The RX IP core detects the start of the multi-frame of user data per lane and then waits for the latest lane data to arrive. The latest arrival is reported as the RBD count ( csr_rbd_count) value, which you can read from the status register.
This is the earliest release opportunity of the data from the deskew FIFO (referred to as RBD offset). The JESD204 RX IP core supports RBD release at 0 offset and also provides programmable offset through RBD count.
By default, the RBD release can be programmed through the csr_rbd_offset register to release at the LMFC boundary. If you want to implement an early release mechanism, program it in the csr_rbd_offset register. The csr_rbd_offset and csr_rbd_count are counters based on the link clock (not the frame clock). Therefore, the RBD release opportunity occurs at every four octets. The word aligner block identifies the MSB and LSB boundaries of the 10-bit character from the serial bit stream. Manual alignment is set because the /K/ character must be detected in either LSB-first or MSB-first mode. When the programmed word alignment pattern is detected in the current word boundary, the PCS indicates a valid pattern on rx_sync_status (mapped as pcs_valid to the IP core).
The code synchronization state is reached after the /K/ character boundary is detected on all lanes. In normal operation, whenever synchronization is lost, the JESD204B RX IP core always returns to the CS_INIT state, where word alignment is initiated.
For debug purposes, you can bypass this alignment by setting the csr_patternalign_en register to 0. The 8B/10B decoder decodes the data received through the serial line. The JESD204B IP core supports both MSB-first and LSB-first transmission order.
The PHY layer can detect 8B/10B not-in-table (NIT) errors and running disparity errors. Example of SYSREF Frequency Calculation. In this example, you can choose to perform one of the following options: • provide two SYSREF and device clock pairs, where the ADC groups share one pair and the DAC group uses the other (18.75 MHz and 9.375 MHz SYSREF, respectively) • provide one SYSREF (running at 9.375 MHz) and device clock for all the ADC and DAC groups, because the ADC SYSREF frequency is an integer multiple of the DAC SYSREF frequency.
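These SYSREF frequencies follow directly from the link clock and the multiframe length expressed in link clocks; a quick check (the helper name is illustrative):

```python
def sysref_hz(data_rate_bps, F, K):
    """SYSREF = link clock / multiframe length in link clocks
             = (data rate / 40) / (F * K / 4)."""
    return (data_rate_bps / 40) / (F * K / 4)

print(sysref_hz(6e9, F=2, K=16) / 1e6)  # 18.75  (ADC group 1)
print(sysref_hz(6e9, F=1, K=32) / 1e6)  # 18.75  (ADC group 2)
print(sysref_hz(3e9, F=2, K=16) / 1e6)  # 9.375  (DAC group 3)
```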
Group configurations and SYSREF frequencies:
• ADC Group 1 (2 ADCs): LMF = 222, K = 16, data rate = 6 Gbps. SYSREF = (6 GHz / 40) / (2 × 16 / 4) = 18.75 MHz
• ADC Group 2 (2 ADCs): LMF = 811, K = 32, data rate = 6 Gbps. SYSREF = (6 GHz / 40) / (1 × 32 / 4) = 18.75 MHz
• DAC Group 3 (2 DACs): LMF = 222, K = 16, data rate = 3 Gbps. SYSREF = (3 GHz / 40) / (2 × 16 / 4) = 9.375 MHz
Subclass 2 Operating Mode. The JESD204B IP core maintains an LMFC counter that counts from 0 to (F × K/4)–1 and then wraps around. The LMFC count starts upon reset, and the logic device always acts as the timing master. To support Subclass 2 for a multi-link device, you must deassert the resets for all JESD204B IP core links synchronously at the same clock edge. This deassertion ensures that the internal LMFC counter is aligned across the multi-link.
The converters adjust their own internal LMFC to match the master's counter. The alignment of LMFC within the system relies on the correct alignment of SYNC_N signal deassertion at the LMFC boundary. The alignment of LMFC to RX logic is handled within the TX converter.
The RX logic releases SYNC_N at the LMFC tick and the TX converter adjusts its internal LMFC to match the RX LMFC. For the alignment of the LMFC to the TX logic, the JESD204B TX IP core samples SYNC_N from the DAC receiver and reports the relative phase difference between the DAC and TX logic device LMFCs in the TX CSR ( dbg_phadj, dbg_adjdir, and dbg_adjcnt). Based on the reported values, you can calculate the adjustment required. Then, to initiate link reinitialization through the CSR, set the value in the TX CSR ( csr_phadj, csr_adjdir, and csr_adjcnt). The phase adjustment values are embedded in bytes 1 and 2 of the ILAS sequence that is sent to the DAC during link initialization. On reception of the ILAS, the DAC adjusts its LMFC phase by the step count value and sends back an error report with the new LMFC phase information.
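A sketch of how the reported values encode the observed SYNC_N deassertion point (the encoding follows the case listing that follows; the function and dictionary keys are illustrative):

```python
def phase_report(sync_count, F, K):
    """dbg_phadj/dbg_adjdir/dbg_adjcnt sketch. sync_count is the LMFC count
    (in link clocks, 0 .. F*K/4 - 1) at which SYNC_N deassertion is seen."""
    lmfc_len = F * K // 4                 # multiframe length in link clocks
    if sync_count == 0:
        return {"phadj": 0, "adjdir": None, "adjcnt": None}  # on the boundary
    if sync_count <= lmfc_len // 2:
        # within the first half: count cycles from the previous LMFC boundary
        return {"phadj": 1, "adjdir": 0, "adjcnt": sync_count}
    # second half: count cycles remaining to the next LMFC boundary
    return {"phadj": 1, "adjdir": 1, "adjcnt": lmfc_len - sync_count}

print(phase_report(0, 2, 16)["phadj"])  # 0
print(phase_report(3, 2, 16))           # {'phadj': 1, 'adjdir': 0, 'adjcnt': 3}
print(phase_report(6, 2, 16))           # {'phadj': 1, 'adjdir': 1, 'adjcnt': 2}
```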
This process may be repeated until the LMFC at the DAC and the logic device are aligned. dbg_phadj, dbg_adjdir, and dbg_adjcnt values for different SYNC_N deassertions:
• Case 1, deassertion at the LMFC boundary: dbg_phadj = 0; dbg_adjdir and dbg_adjcnt are not applicable.
• Case 2, deassertion at an LMFC count value equal to or less than half of the F × K/4 value: dbg_phadj = 1, dbg_adjdir = 0, and dbg_adjcnt is the number of link clock cycles from the LMFC boundary to the detection of the SYNC_N deassertion.
• Case 3, deassertion at an LMFC count value more than half of the F × K/4 value: dbg_phadj = 1, dbg_adjdir = 1, and dbg_adjcnt is the number of link clock cycles from the detection of the SYNC_N deassertion to the next LMFC boundary.
Both the scrambler and descrambler are designed in a 32-bit parallel implementation, and the scrambling/descrambling order starts from the first octet with the MSB first. The JESD204B TX and RX IP cores support scrambling by implementing a 32-bit parallel scrambler in each lane. The scrambler and descrambler are located in the JESD204B IP MAC interfacing to the Avalon-ST interface. You can enable or disable scrambling, and this option applies to all lanes.
Mixed mode operation, where scrambling is enabled only for some lanes, is not permitted. The scrambling polynomial is 1 + x^14 + x^15. The descrambler can self-synchronize in eight octets. In a typical application where the reset value of the scrambler seed differs between the converter device and the FPGA logic device, the correct user data is recovered in the receiver in two link clocks (due to the 32-bit architecture). The PRBS pattern checker on the transport layer should always disable checking of the first eight octets from the JESD204B RX IP core. SYNC_N Signal.
For Subclass 0 implementation, the SYNC_N signals from the DAC converters in the same group path must be combined. In some applications, multiple converters are grouped together in the same group path to sample a signal (referred to as a multipoint link). The FPGA can only start the LMFC counter and its transition to ILAS after all the links deassert the synchronization request. The JESD204B TX IP core provides three signals to facilitate this application.
The SYNC_N is the direct signal from the DAC converters. The error signaling from SYNC_N is filtered and sent out as the dev_sync_n signal. For Subclass 0, you need to multiplex all the dev_sync_n signals in the same multipoint link and then input them to the IP core through the mdev_sync_n signal. For Subclass 1 implementation, you may choose to combine or not combine the SYNC_N signals from the converter devices.
If you implement two ADC converter devices as a multipoint link and one of the converters is unable to link up, the functional link still operates. You must manage the trace lengths of the SYSREF signal and its differential pair to minimize skew. The SYNC_N is the direct signal from the DAC converters. The error signaling from SYNC_N is filtered and sent out as the dev_sync_n output signal. The dev_sync_n signal from the JESD204B TX IP core must loop back into the mdev_sync_n signal of the same instance without combining the SYNC_N signals. Apart from that, you must set the same RBD offset value ( csr_rbd_offset) in all the JESD204B RX IP cores within the same multipoint link for the RBD release (based on the latest lane arrival among the links). The JESD204B RX IP core deskews and outputs the data when the RBD offset value is met.
The total latency is consistent in the system and is also the same across multiple resets. Setting a different RBD offset to each link or setting an early release does not guarantee deterministic latency and data alignment.
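The offset bookkeeping across a multipoint link can be sketched as follows. Choosing one common csr_rbd_offset that covers the worst-case latest arrival (plus a margin) is an assumption of this sketch, not a register-map prescription:

```python
def common_rbd_offset(latest_arrival_counts, margin=1):
    """Pick one RBD offset (in link clocks) shared by all RX links of a
    multipoint link: it must be no earlier than the latest lane arrival
    observed on any link (csr_rbd_count), plus an optional safety margin."""
    return max(latest_arrival_counts) + margin

# csr_rbd_count values read from three links of one multipoint link (example)
print(common_rbd_offset([3, 5, 4]))  # 6
```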
The JESD204B TX and RX IP cores support link reinitialization. There are two modes of entry for link reinitialization: • Hardware-initiated link reinitialization: • For TX, the reception of SYNC_N asserted for more than five frames plus nine octets triggers link reinitialization.
• For RX, a loss of code group synchronization, or frame alignment and lane alignment errors, causes the IP core to assert SYNC_N and request link reinitialization. • Software-initiated link reinitialization—both the TX and RX IP cores allow software to request link reinitialization.
• For TX, the IP core transmits /K/ characters and waits for the receiver to assert SYNC_N to indicate that it has entered the CS_INIT state. • For RX, the IP core asserts SYNC_N to request link reinitialization. Hardware-initiated link reinitialization can be globally disabled through the csr_link_reinit_disable register for debug purposes.
Hardware-initiated link reinitialization can be issued as an interrupt, depending on the error type and the interrupt error enable. If lane misalignment is detected as a result of a phase change in the local timing reference, software can rely on this interrupt trigger to initiate an LMFC realignment. The realignment occurs by first resampling SYSREF and then issuing a link reinitialization request. Link Startup Sequence. TX (Subclass 2): Similar to Subclass 1 mode, the JESD204B TX IP core is in the CGS phase upon reset deassertion. The LMFC alignment between the converter and the IP core starts after SYNC_N deassertion.
The JESD204B TX IP core detects the deassertion of SYNC_N and compares the timing to its own LMFC. The required adjustment in the link clock domain is updated in the register map. You need to update the final phase adjustment value in the registers for it to transfer to the converter during the ILAS phase. The DAC adjusts the LMFC phase and acknowledges the phase change with an error report. This error report contains the new DAC LMFC phase information, which allows the loop to iterate until the phases are aligned. RX (Subclass 1): The JESD204B RX IP core drives and holds SYNC_N (the dev_sync_n signal) low when it is in reset.
Upon reset deassertion, the JESD204B RX IP core checks that it has received sufficient /K/ characters to move its state machine out of the synchronization request state. The IP core also ensures that at least one SYSREF rising edge is sampled before deasserting SYNC_N. This prevents a race condition where SYNC_N is deasserted based on the internal free-running LMFC count instead of the updated LMFC count after SYSREF is sampled. The JESD204B TX IP core can detect error reporting through SYNC_N when SYNC_N is asserted for two frame clock periods (if F ≥ 2) or four frame clock periods (if F = 1).
When the downstream device reports an error through SYNC_N, the TX IP core issues an interrupt. The TX IP core samples the SYNC_N pulse width using the link clock. For the special case of F = 1, two frame clock periods are shorter than one link clock period, so the error signaling from the receiver may be lost. You must program the converter device to extend the SYNC_N pulse to four frame clocks when F = 1.
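The F = 1 corner case is simple arithmetic in unit intervals (UI, one serial bit time): a frame is 10 × F UI after 8B/10B encoding, while one link clock period is 40 UI. A sketch:

```python
def frame_and_link_ui(F):
    """Durations in unit intervals: a frame is 10*F UI after 8B/10B
    encoding; a link clock period is 40 UI (four encoded octets)."""
    return 10 * F, 40

frame_ui, link_ui = frame_and_link_ui(1)
print(2 * frame_ui < link_ui)  # True: a two-frame pulse fits inside one link clock
frame_ui, link_ui = frame_and_link_ui(2)
print(2 * frame_ui < link_ui)  # False: two frames span a full link clock period
```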
The JESD204 RX IP core does not report an error through SYNC_N signaling. Instead, the RX IP core issues an interrupt when any error is detected. You can check the csr_tx_err, csr_rx_err0, and csr_rx_err1 register status to determine the error types. Clocking Scheme.
JESD204B IP core clocks:
• TX/RX Device Clock ( pll_ref_clk). Formula: PLL selection during IP core generation. The PLL reference clock used by the TX transceiver PLL or RX CDR. This is also the recommended reference clock to the Intel ® FPGA PLL IP core (for Arria V or Stratix V devices) or the Intel ® FPGA IOPLL IP core (for Intel ® Arria ® 10 devices).
• TX/RX Link Clock ( txlink_clk, rxlink_clk). Formula: data rate/40. The timing reference for the JESD204B IP core. The link clock runs at data rate/40 because the IP core operates on a 32-bit data bus architecture after 8B/10B encoding. For Subclass 1, to avoid half link clock latency variation, you must supply the device clock at the same frequency as the link clock. The JESD204B transport layer in the design example requires both the link clock and frame clock to be synchronous.
• TX/RX Frame Clock, in the design example ( txframe_clk, rxframe_clk). Formula: data rate/(10 × F). The frame clock as per the JESD204B specification. This clock applies to the JESD204B transport layer and other upstream devices that run on the frame clock, such as the PRBS generator/checker or any data processing blocks that run at the same rate as the frame clock. The JESD204B transport layer in the design example also supports running the frame clock at half rate or quarter rate by using the FRAMECLK_DIV parameter. The JESD204B transport layer requires both the link clock and frame clock to be synchronous. For more information, refer to the F1/F2_FRAMECLK_DIV parameter description and its relationship to the frame clock in the JESD204B IP Core Design Example User Guide.
• TX/RX Transceiver Serial Clock and Parallel Clock. Formula: internally derived from the data rate during IP core generation. The serial clock is the bit clock used to stream out serialized data. The transceiver PLL supplies this clock, and it is internal to the transceiver. The parallel clock is for the transmitter PMA and PCS within the PHY; it is also internal to the transceiver and is not exposed in the JESD204B IP core. For Arria V, Cyclone V, and Stratix V devices, these clocks are internally generated because the transceiver PLL is encapsulated within the JESD204B IP core's PHY. For Intel ® Arria ® 10 and Intel ® Stratix ® 10 devices, you need to generate the transceiver PLL based on the data rate and connect the serial and parallel clocks. You are recommended to select medium bandwidth for the transceiver PLL setting. These clocks are referred to as *serial_clk and *bonding_clock in Intel ® Arria ® 10 and Intel ® Stratix ® 10 devices. Refer to the respective Transceiver PHY IP Core User Guide for more information.
• TX/RX PHY Clock ( txphy_clk, rxphy_clk). Formula: data rate/40 (for all devices except Arria V GT/ST in PMA Direct mode) or data rate/80 (for Arria V GT/ST devices in PMA Direct mode). The PHY clock generated from the transceiver parallel clock for the TX path, or the recovered clock generated from the CDR for the RX path. There is limited use for this clock. Avoid using this clock when PMA Direct mode is selected. Use this clock only if the JESD204B configuration is F = 4 and the core operates in Subclass 0 mode. This clock can be used as the input for both txlink_clk and txframe_clk, or rxlink_clk and rxframe_clk. When you set the PCS option to Hard PCS or Soft PCS mode, txphy_clk connects to the transceiver tx_std_clkout signal and rxphy_clk connects to the rx_std_clkout signal; these are the clock lines at the PCS and FPGA fabric interface. When you enable PMA Direct mode (for Arria V GT/ST only), txphy_clk connects to the transceiver tx_pma_clkout signal and rxphy_clk connects to the rx_pma_clkout signal; these are the clock lines at the PMA and PCS interface.
• TX/RX AVS Clock ( jesd204_tx_avs_clk, jesd204_rx_avs_clk). Frequency: 75–125 MHz. The configuration clock for the JESD204B IP core CSR through the Avalon-MM interface.
• Transceiver Management Clock ( reconfig_clk). Frequency: 100–125 MHz ( Intel ® Arria ® 10) or 100–150 MHz ( Intel ® Stratix ® 10). The configuration clock for the transceiver CSR through the Avalon-MM interface. This clock is exported only when the transceiver dynamic reconfiguration option is enabled, and is applicable only to Intel ® Arria ® 10 and Intel ® Stratix ® 10 devices.
For the JESD204B IP core in an FPGA logic device, you need one or two reference clocks. In the single reference clock design, the device clock is used as the transceiver PLL reference clock and also the core PLL reference clock.
In the dual reference clock design, the device clock is used as the core PLL reference clock and the other reference clock is used as the transceiver PLL reference clock. The available frequency depends on the PLL type, bonding option, number of lanes, and device family. During IP core generation, the Intel ® Quartus ® Prime software recommends the available reference frequency for the transceiver PLL and core PLL based on user selection.
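The link and frame clock formulas above can be checked with a small helper (the 6.144 Gbps data rate is an arbitrary example, and the helper name is illustrative):

```python
def jesd204b_clocks(data_rate_bps, F, frameclk_div=1):
    """link clock = data rate / 40; frame clock = data rate / (10 * F),
    optionally divided further by the design example's FRAMECLK_DIV."""
    link_hz = data_rate_bps / 40
    frame_hz = data_rate_bps / (10 * F) / frameclk_div
    return link_hz, frame_hz

link_hz, frame_hz = jesd204b_clocks(6.144e9, F=2)
print(link_hz / 1e6, frame_hz / 1e6)  # 153.6 307.2
```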
Note: Due to the clock network architecture in the FPGA, Intel recommends that you use the device clock to generate the link clock and use the link clock as the timing reference. You need to utilize the Intel ® FPGA PLL IP core (in Arria V and Stratix V devices) or Intel ® FPGA IOPLL IP core (in Intel ® Arria ® 10 and Intel ® Stratix ® 10 devices) to generate the link clock and frame clock. The link clock is used in the JESD204 IP core (MAC) and the transport layer.
Intel recommends that you supply the reference clock source through a dedicated reference clock pin. Based on the JESD204B specification for Subclass 1, the device clock is the timing reference and is source synchronous with SYSREF. To achieve deterministic latency, match the board trace length of the SYSREF signal with that of the device clock. Maintain a constant phase relationship between the device clock and SYSREF signal pairs going to the FPGA and converter devices. Ideally, the SYSREF pulses from the clock generator should arrive at the FPGA and converter devices at the same time. To avoid half link clock latency variation, you must supply the device clock at the same frequency as the link clock.
The JESD204B protocol does not support rate matching. Therefore, you must ensure that the TX or RX device clock (pll_ref_clk) and the PLL reference clock that generates the link clock (txlink_clk or rxlink_clk) and frame clock (txframe_clk or rxframe_clk) have 0 ppm variation. Both PLL reference clocks should come from the same clock chip.
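To illustrate why these clocks must track each other, both the link clock and frame clock are fixed ratios of the serial data rate. The Python sketch below derives them, assuming the 32-bit datapath (four octets per link clock) and 8b/10b encoding described later in this section; the function name is illustrative, not an IP core deliverable.

```python
def jesd204b_clocks(data_rate_gbps: float, F: int) -> tuple[float, float]:
    """Derive the link and frame clock frequencies in MHz.

    Assumes 8b/10b encoding and a 32-bit (4 octets per link clock)
    datapath, so the link clock is the serial data rate divided by 40.
    """
    link_clk_mhz = data_rate_gbps * 1e3 / 40   # 4 octets x 10 bits per link clock
    frame_clk_mhz = link_clk_mhz * 4 / F       # F octets per frame
    return link_clk_mhz, frame_clk_mhz

# With F=4 the frame clock equals the link clock, which is why a single
# clock can drive both txlink_clk and txframe_clk in that configuration.
print(jesd204b_clocks(10.0, 4))   # (250.0, 250.0)
print(jesd204b_clocks(10.0, 1))   # (250.0, 1000.0)
```

Because both clocks derive from the same serial data rate, sourcing both PLL reference clocks from the same clock chip preserves the required 0 ppm relationship.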
The device clock is the timing reference for the JESD204B system. Due to the clock network architecture in the FPGA, the JESD204B IP core does not use the device clock to clock the SYSREF signal because the GCLK or RCLK is not fully compensated. Intel recommends that you use the Intel® FPGA PLL IP core (in Arria V and Stratix V devices) or Intel® FPGA IOPLL IP core (in Intel® Arria® 10 and Intel® Stratix® 10 devices) to generate both the link clock and frame clock. The Intel® FPGA PLL IP core must operate in normal mode or source synchronous mode and use a dedicated reference clock pin as the input reference clock source so that: • The GCLK and RCLK clock network latency is fully compensated. • The link clock and frame clock at the registers are phase-aligned to the input of the clock pin. To provide consistency across the design regardless of the frame clock and sampling clock, the link clock is used as the timing reference. The Intel® FPGA PLL IP core should provide both the frame clock and link clock from the same PLL because these two clocks are treated as synchronous in the design.
For Subclass 0 mode, the device clock is not required to sample the SYSREF signal edge, and the link clock does not need to be phase compensated to capture SYSREF. Therefore, you can generate both the link clock and frame clock using direct mode in the Intel® FPGA PLL IP core. If F = 4, where the link clock is the same as the frame clock, you can use the parallel clock output from the transceiver (the txphy_clk or rxphy_clk signal), except when the PCS option is in PMA Direct mode.
The Local Multi-Frame Clock (LMFC) is a counter generated from the link clock that depends on the F and K parameters. The K parameter must be set between 1 and 32, and F × K must meet the requirement of a minimum of 17 octets and a maximum of 1024 octets in a single multiframe. In a 32-bit architecture, F × K must also be divisible by four. In a Subclass 1 deterministic latency system, SYSREF is distributed to the devices to align them in the system. SYSREF resets the internal LMFC clock edge when the sampled SYSREF signal transitions from 0 to 1.
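A quick way to sanity-check a chosen F and K against these constraints is a small predicate. This Python sketch (the function name is illustrative) encodes the 17-to-1024 octet range, the K limit, and the divisible-by-four rule for the 32-bit architecture:

```python
def valid_lmfc_params(F: int, K: int) -> bool:
    """Check the F (octets per frame) and K (frames per multiframe)
    constraints for the 32-bit architecture described above."""
    octets = F * K  # octets per multiframe
    return 1 <= K <= 32 and 17 <= octets <= 1024 and octets % 4 == 0

print(valid_lmfc_params(1, 32))  # True
print(valid_lmfc_params(1, 16))  # False: 16 octets, below the 17-octet minimum
print(valid_lmfc_params(2, 9))   # False: 18 octets is not divisible by 4
```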
Because SYSREF is source synchronous with respect to the device clock (provided from the clock chip), the JESD204B IP core does not directly use the device clock to sample SYSREF; instead, it uses the link clock to sample SYSREF. Therefore, the Intel® FPGA PLL IP core that provides the link clock must be in normal mode to phase-compensate the link clock to the device clock. Based on hardware testing, to get a fixed latency, at least 32 octets are recommended in an LMFC period so that there is margin to tune the RBD release opportunity to compensate for any lane-to-lane deskew across multiple resets.
If F = 1, then K = 32 is optimal because it provides enough margin for system latency variation. If F = 2, then K = 16 and above (18/20/22/24/26/28/30/32) is sufficient to compensate for lane-to-lane deskew. The JESD204B IP core implements the local multi-frame clock as a counter that increments in link clock counts.
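The hardware-tested guidance above (at least 32 octets per LMFC period) can be combined with the basic F × K constraints to enumerate workable K values for a given F. A Python sketch, with an illustrative function name:

```python
def recommended_k_values(F: int) -> list[int]:
    """List K values (1..32) that keep F*K within 17..1024 octets,
    divisible by 4, and at or above the 32-octet LMFC period
    recommended for tuning the RBD release opportunity."""
    return [K for K in range(1, 33)
            if 17 <= F * K <= 1024 and (F * K) % 4 == 0 and F * K >= 32]

print(recommended_k_values(1))  # [32]
print(recommended_k_values(2))  # [16, 18, 20, 22, 24, 26, 28, 30, 32]
```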
The local multi-frame clock counter is equal to (F × K / 4) in link clock units. The rising edge of SYSREF resets the local multi-frame clock counter to 0. Two CSR bits control SYSREF sampling: • csr_sysref_singledet—resets the local multi-frame clock counter once and is automatically cleared after SYSREF is sampled. This register also prevents a CGS exit from bypassing SYSREF sampling.
• csr_sysref_alwayson—resets the local multi-frame clock counter at every rising edge of SYSREF that it detects. This register also enables the SYSREF period checker.
If the provided SYSREF period violates the F and K parameters, an interrupt is triggered. However, this register does not prevent the CGS-SYSREF race condition. If both CSR bits are set, the IP core: • resets the local multi-frame clock counter at every rising edge of SYSREF. • prevents the CGS-SYSREF race condition. • checks the SYSREF period.
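The LMFC counter and the two SYSREF CSR bits can be summarized in a small behavioral model. This Python sketch is not RTL and the class name is illustrative; it only mirrors the counting and reset behavior described above.

```python
class LmfcCounter:
    """Behavioral model of the LMFC: the counter increments once per
    link clock, wraps at F*K/4, and resets to 0 on a sampled SYSREF
    rising edge when either CSR bit enables SYSREF sampling."""

    def __init__(self, F: int, K: int):
        self.period = F * K // 4        # link clocks per multiframe
        self.count = 0
        self.singledet = False          # models csr_sysref_singledet
        self.alwayson = False           # models csr_sysref_alwayson
        self.prev_sysref = 0

    def tick(self, sysref: int) -> int:
        """Advance one link clock, sampling the SYSREF level."""
        rising = sysref == 1 and self.prev_sysref == 0
        self.prev_sysref = sysref
        if rising and (self.singledet or self.alwayson):
            self.count = 0              # LMFC realigned to SYSREF
            self.singledet = False      # single-detect bit self-clears
        else:
            self.count = (self.count + 1) % self.period
        return self.count
```

With alwayson set, every SYSREF rising edge realigns the counter; in the real core this mode also arms the SYSREF period checker that raises the interrupt described above.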
JESD204B IP Core Resets
• txlink_rst_n, rxlink_rst_n (TX/RX link clock)—active low reset. Intel recommends that you: • Assert the txlink_rst_n/rxlink_rst_n and txframe_rst_n/rxframe_rst_n signals when the transceiver is in reset. • Deassert the txlink_rst_n and txframe_rst_n signals after the Intel® FPGA PLL IP core is locked and the tx_ready[] signal from the transceiver reset controller is asserted. • Deassert the rxlink_rst_n and rxframe_rst_n signals after the transceiver CDR rx_islockedtodata[] signal and the rx_ready[] signal from the transceiver reset controller are asserted. The txlink_rst_n/rxlink_rst_n and txframe_rst_n/rxframe_rst_n signals can be deasserted at the same time.
These resets can only be deasserted after you configure the CSR registers.
• txframe_rst_n, rxframe_rst_n (TX/RX frame clock)—active low reset controlled by the clock and reset unit. If the TX/RX link clock and the TX/RX frame clock have the same frequency, both can share the same reset.
• tx_analogreset[L-1:0], rx_analogreset[L-1:0] (Transceiver Native PHY analog reset)—active high reset controlled by the transceiver reset controller. This signal resets the TX/RX PMA. The link clock, frame clock, and AVS clock reset signals (txlink_rst_n/rxlink_rst_n, txframe_rst_n/rxframe_rst_n, and jesd204_tx_avs_rst_n/jesd204_rx_avs_rst_n) can only be deasserted after the transceiver comes out of reset.
• tx_analogreset_stat[L-1:0], rx_analogreset_stat[L-1:0] (Transceiver Native PHY analog reset status)—TX/RX PMA analog reset status port connected to the transceiver reset controller. This signal is applicable for Intel® Stratix® 10 devices only.
• tx_digitalreset[L-1:0], rx_digitalreset[L-1:0] (Transceiver Native PHY digital reset)—active high reset controlled by the transceiver reset controller. This signal resets the TX/RX PCS. The link clock, frame clock, and AVS clock reset signals (txlink_rst_n/rxlink_rst_n, txframe_rst_n/rxframe_rst_n, and jesd204_tx_avs_rst_n/jesd204_rx_avs_rst_n) can only be deasserted after the transceiver comes out of reset.
• tx_digitalreset_stat[L-1:0], rx_digitalreset_stat[L-1:0] (Transceiver Native PHY digital reset status)—TX/RX PCS digital reset status port connected to the transceiver reset controller.
This signal is applicable for Intel® Stratix® 10 devices only.
• jesd204_tx_avs_rst_n, jesd204_rx_avs_rst_n (TX/RX AVS (CSR) clock)—active low reset controlled by the clock and reset unit. Typically, both signals can be deasserted after the core PLL and transceiver PLL are locked and out of reset. If you want to dynamically modify the LMF at run time, you can program the CSRs after the AVS reset is deasserted. This phase is referred to as the configuration phase. Only after the configuration phase is complete can the txlink_rst_n/rxlink_rst_n and txframe_rst_n/rxframe_rst_n signals be deasserted.
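The TX-side reset release ordering described above can be captured as a simple trace check. In this Python sketch the event names are illustrative shorthand for the conditions in the reset descriptions, not actual IP core port names:

```python
def tx_reset_release_ok(events: list[str]) -> bool:
    """Verify that txlink_rst_n is deasserted only after the core PLL
    locks, tx_ready asserts, and the CSR configuration phase completes."""
    prereqs = ["core_pll_locked", "tx_ready_asserted", "csr_configured"]
    release = events.index("txlink_rst_n_deasserted")
    return all(events.index(p) < release for p in prereqs)

# A trace that honors the ordering described above:
print(tx_reset_release_ok(
    ["core_pll_locked", "tx_ready_asserted", "csr_configured",
     "txlink_rst_n_deasserted"]))   # True
```

The same check applies to the RX side with rx_ready, rx_islockedtodata, and rxlink_rst_n substituted for their TX counterparts.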
Hi guys, I need your help again. Here is the scenario of what I want to do: 1- I have an input that is std_logic_vector (N downto 0) and 5.