Same-day delivery (SDD) services have become increasingly popular in recent years. Previous studies have typically modelled them as a class of dynamic vehicle routing problem (DVRP) in which goods must be delivered from a depot to a set of customers on the same day the orders are placed. Adaptive exact solution methods for DVRPs can become intractable even for small problem instances. In this paper, the same-day delivery problem (SDDP) is formulated as a Markov decision process (MDP) and solved using a parameter-sharing Deep Q-Network, corresponding to a decentralised multi-agent reinforcement learning (MARL) approach. To this end, a multi-agent grid-based SDD environment is created, consisting of multiple vehicles, a central depot, and dynamic order generation. In addition, zone-specific order-generation and reward probabilities are introduced. The performance of the proposed MARL approach is compared against a mixed-integer programming (MIP) solution. Results show that the proposed MARL framework performs on par with the MIP-based policy when the number of orders is relatively low. For problem instances with higher order arrival rates, computational results show that the MARL approach underperforms MIP by up to 30%. The performance gap between the two methods narrows when zone-specific parameters are employed. Execution-time results indicate that the MARL approach is, on average, 65 times faster than the MIP-based policy, and may therefore be more advantageous for real-time control, at least for small problem instances.