The Compact Muon Solenoid (CMS) Trigger of the Large Hadron Collider (LHC) particle accelerator at CERN selects potentially interesting particle-collision data to process and archive for further study. The first stage of the trigger system, the hardware-based L1 Trigger, must sift through roughly 3 terabits/second of data describing the energy distribution of particles generated in the collisions and reduce it to 100 megabits/second of event data that subsequent systems can handle. Without the CMS Trigger, the experiment would quickly generate more data than the LHC system can archive. Because of the sheer volume of input data and the rate at which it arrives, the hardware-based L1 Trigger is subject to stringent performance requirements. These requirements will become even more severe as the LHC is upgraded over the next ten years, necessitating a careful redesign of the L1 Trigger hardware. For example, future upgrades may introduce particle motion-tracking data into the L1 Trigger, raising the input data rate to as much as 40 terabits/second. The need to modify the design as the LHC system evolves, the cost advantages of FPGAs at low production volumes, and the desire for a flexible, adaptable system all point toward FPGAs as the hardware implementation platform. In this paper, we present several FPGA implementations of the electron/photon identification module, a key part of the new Clustering Algorithm for the upgraded L1 Trigger. We analyze their resource requirements and performance tradeoffs, and we qualitatively discuss their flexibility to meet the changing needs of the CMS experiment. Finally, we narrow the potential design choices to the top candidates and use one of them in a full Clustering Algorithm implementation.