Server2 is 3 hops away from its receivers
![](https://www.atlink.fr/wp-content/uploads/2019/07/image-32.png)
R3 receives the multicast traffic and forwards it out the interfaces in its outgoing interface list
![](https://www.atlink.fr/wp-content/uploads/2019/07/image-33.png)
At R2
![](https://www.atlink.fr/wp-content/uploads/2019/07/image-34.png)
At R2 the outgoing interface list also includes the incoming interface
The incoming interface is Null and the RPF neighbor indication is incorrect
We check that no route exists at R2 to reach the source
![](https://www.atlink.fr/wp-content/uploads/2019/07/image-35.png)
We enable multicast routing debugging and confirm that the RPF (Reverse Path Forwarding) check fails
![](https://www.atlink.fr/wp-content/uploads/2019/07/image-36.png)
We configure a static route to Server2
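The diagnose-and-fix sequence above can be sketched on R2 as follows. All addresses and interface names here are illustrative (the lab's real addressing is only visible in the screenshots), and the debug command shown is the standard IOS `debug ip mrouting`:

```
! Verify that R2 has no route back toward the source (Server2)
R2# show ip route 10.2.2.2

! Watch RPF check failures in real time
R2# debug ip mrouting

! Add a static route toward the source so the RPF check succeeds
! (10.2.2.0/24 and next hop 192.168.23.3 are hypothetical values)
R2(config)# ip route 10.2.2.0 255.255.255.0 192.168.23.3

! Confirm the mroute entry now shows a valid incoming interface
R2# show ip mroute 226.0.0.1
```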
![](https://www.atlink.fr/wp-content/uploads/2019/07/image-37.png)
Now the mroute table is populated with correct information
![](https://www.atlink.fr/wp-content/uploads/2019/07/image-38.png)
We do the same thing on R1 and check the mroute information about 226.0.0.1 multicast group
![](https://www.atlink.fr/wp-content/uploads/2019/07/image-39.png)
We verify that Server1 now receives replies from Client1 to its pings
![](https://www.atlink.fr/wp-content/uploads/2019/07/image-40.png)
In this lab PIM is configured in dense mode
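As a reminder, a minimal PIM dense mode configuration on each router looks like the sketch below (interface names are illustrative, not taken from the lab):

```
! Enable multicast routing globally
R1(config)# ip multicast-routing

! Enable PIM dense mode on every interface that must carry multicast
R1(config)# interface GigabitEthernet0/0
R1(config-if)# ip pim dense-mode
R1(config-if)# interface GigabitEthernet0/1
R1(config-if)# ip pim dense-mode
```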
![](https://www.atlink.fr/wp-content/uploads/2019/07/image-41.png)
At R2
![](https://www.atlink.fr/wp-content/uploads/2019/07/image-42.png)
At R3
![](https://www.atlink.fr/wp-content/uploads/2019/07/image-43.png)
PIM is the multicast routing protocol that delivers multicast traffic in a network like our lab setup (where routing between subnets is necessary)
The dense variant of PIM floods the server's multicast traffic domain-wide (to all routers and all interfaces configured for PIM)
Let’s check the flood and prune behavior of PIM dense mode
Client1 is no longer interested in 226.0.0.1
![](https://www.atlink.fr/wp-content/uploads/2019/07/image-44.png)
How would R1 react?
To see PIM in action, we enable debug ip pim
![](https://www.atlink.fr/wp-content/uploads/2019/07/image-45.png)
We check the active IGMP memberships
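This check can be reproduced with the standard IGMP show commands (the interface name is assumed):

```
! List active IGMP group memberships for the lab group
R1# show ip igmp groups 226.0.0.1

! Inspect IGMP timers and querier state on the receiver-facing interface
R1# show ip igmp interface GigabitEthernet0/0
```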
![](https://www.atlink.fr/wp-content/uploads/2019/07/image-46.png)
We notice that the 226.0.0.1 entry in the IGMP table is flagged S for static and A for aggregate
The Exp. field, which indicates the membership tracking status, shows that Client1 has stopped joining the group
As a test, we reconfigure Client1 to join the group; the result:
![](https://www.atlink.fr/wp-content/uploads/2019/07/image-47.png)
The Exp. field countdown timer is running again
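If Client1 is, as is common in such labs, a router emulating a multicast receiver, the leave/join toggling can be done with `ip igmp join-group` (the interface name is assumed):

```
Client1(config)# interface GigabitEthernet0/0
! Statically join the lab's multicast group
Client1(config-if)# ip igmp join-group 226.0.0.1
! ...and to leave the group again:
Client1(config-if)# no ip igmp join-group 226.0.0.1
```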
As soon as all IGMP memberships (static or dynamic) are removed, R1 sends a prune toward its RPF neighbor
![](https://www.atlink.fr/wp-content/uploads/2019/07/image-48.png)
In the mroute table the entry is flagged as P, which indicates that it’s pruned
![](https://www.atlink.fr/wp-content/uploads/2019/07/image-49.png)
Additionally, the outgoing interface list is changed to Null
If Client1 joins again, R1 sends a graft message to its PIM RPF neighbor to request the group's multicast traffic again, and populates the outgoing interface list with the corresponding interface
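The prune and graft exchanges described above can be observed with the debug already enabled, together with the mroute table:

```
! PIM control messages (prunes, grafts, graft-acks) appear here
R1# debug ip pim

! The P flag disappears and the outgoing interface list is repopulated
R1# show ip mroute 226.0.0.1
```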
![](https://www.atlink.fr/wp-content/uploads/2019/07/image-50.png)
To be continued…