Secure aggregation protects the local models of the users in federated learning by preventing the server from obtaining any information beyond the aggregate model at each iteration. Secure aggregation alone, however, does not protect the integrity of the aggregate model against a malicious server that forges the aggregation result, which motivates verifiable aggregation in federated learning. Existing verifiable aggregation schemes either have complexity that is linear in the model size or, in the likely case of user dropouts, require time-consuming reconstruction at the server that is quadratic in the number of users. To overcome these limitations, we propose LightVeriFL, a lightweight and communication-efficient secure verifiable aggregation protocol that provides the same guarantees for verifiability against a malicious server, data privacy, and dropout resilience as the state-of-the-art protocols, without incurring substantial communication and computation overheads. LightVeriFL utilizes constant-length homomorphic hash and commitment functions, whose size is independent of the model size, to enable verification at the users. In case of dropouts, LightVeriFL uses a one-shot aggregate hash recovery for the dropped-out users, instead of a one-by-one recovery, making the verification process significantly faster than existing approaches. Comprehensive experiments demonstrate the advantage of LightVeriFL in practical settings.
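
To illustrate the hashing idea the abstract relies on, the following is a minimal sketch (not the paper's implementation) of a discrete-log-based homomorphic hash H(w) = prod_i g_i^{w_i} mod p: its output is a single group element regardless of the model dimension, and H(w1 + w2) = H(w1) * H(w2), which is what allows a claimed aggregate to be checked against the product of per-user hashes. The group parameters, generator construction, and toy model updates below are illustrative assumptions, not values from the paper.

```python
import random

# Toy safe-prime group parameters (illustrative only; real deployments use
# cryptographically sized parameters, e.g., a 2048-bit prime or an elliptic curve).
p = 2 * 1019 + 1          # p = 2039, a safe prime; subgroup order q = 1019
q = 1019

def make_generators(d, seed=0):
    """Sample d public generators of the order-q subgroup of Z_p^*."""
    rng = random.Random(seed)
    gens = []
    while len(gens) < d:
        h = rng.randrange(2, p - 1)
        g = pow(h, 2, p)      # squaring maps into the order-q subgroup
        if g != 1:
            gens.append(g)
    return gens

def hom_hash(weights, gens):
    """Constant-length hash of a weight vector: prod_i g_i^{w_i mod q} mod p."""
    h = 1
    for w, g in zip(weights, gens):
        h = (h * pow(g, w % q, p)) % p
    return h

# --- verifying an aggregate against per-user hashes ---
d = 5                                  # toy model dimension
gens = make_generators(d)
w1 = [3, 1, 4, 1, 5]                   # user 1's (quantized) model update
w2 = [2, 7, 1, 8, 2]                   # user 2's (quantized) model update
aggregate = [a + b for a, b in zip(w1, w2)]

# Each user publishes only its short hash; the hash of the server's claimed
# aggregate must equal the product of those hashes, or the result is rejected.
assert hom_hash(aggregate, gens) == (hom_hash(w1, gens) * hom_hash(w2, gens)) % p
print("aggregate hash verified")
```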