Multitenant logging in Container insights is useful for customers who operate shared cluster platforms using AKS. You may need to configure container console log collection so that logs are segregated by team: each team has access to the container logs of the containers running in the Kubernetes namespaces they own, and to the billing and management of the associated Log Analytics workspace. For example, container logs from infrastructure namespaces such as kube-system can be directed to a dedicated Log Analytics workspace for the infrastructure team, while each application team's container logs are sent to their respective workspaces.
This article describes how multitenant logging works in Container insights, the scenarios it supports, and how to onboard your cluster to use this feature.
Scenarios
The multitenant logging feature in Container insights supports the following scenarios:
- Multitenancy: Sends container logs (stdout & stderr) from one or more K8s namespaces to a corresponding Log Analytics workspace.
- Multihoming: Sends the same set of container logs (stdout & stderr) from one or more K8s namespaces to multiple Log Analytics workspaces.
How it works
Container insights uses a data collection rule (DCR) to define the data collection settings for your AKS cluster. A default ContainerInsights Extension DCR is created automatically when you enable Container insights. This DCR is a singleton, meaning there is one DCR per Kubernetes cluster.
For multitenant logging, Container Insights adds support for ContainerLogV2Extension DCRs, which are used to define collection of container logs for K8s namespaces. Multiple ContainerLogV2Extension DCRs can be created with different settings for different namespaces and all associated with the same AKS cluster.
When you enable the multitenancy feature through a ConfigMap, the Container Insights agent periodically fetches both the default ContainerInsights extension DCR and the ContainerLogV2Extension DCRs associated with the AKS cluster. This fetch is performed every 5 minutes beginning when the container is started. If any additional ContainerLogV2Extension DCRs are added, they'll be recognized the next time the fetch is performed. All configured streams in the default DCR aside from container logs continue to be sent to the Log Analytics workspace in the default ContainerInsights DCR as usual.
The following logic is used to determine how to process each log entry:
- If there is a ContainerLogV2Extension DCR for the namespace of the log entry, that DCR is used to process the entry. This includes the Log Analytics workspace destination and any ingestion-time transformation.
- If there isn't a ContainerLogV2Extension DCR for the namespace of the log entry, the default ContainerInsights DCR is used to process the entry.
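The routing decision above can be sketched as a simple namespace lookup. This is a minimal illustration with hypothetical names and data structures, not the agent's actual code:

```python
def route_log_entry(namespace, extension_dcrs, default_dcr):
    """Pick the DCR that processes a log entry.

    extension_dcrs: dict mapping a K8s namespace to its
        ContainerLogV2Extension DCR (hypothetical representation).
    default_dcr: the default ContainerInsights DCR.
    The chosen DCR supplies the Log Analytics workspace destination
    and any ingestion-time transformation.
    """
    return extension_dcrs.get(namespace, default_dcr)

# Example: app-team-1 has its own DCR; kube-system falls back to the default.
extension_dcrs = {"app-team-1": {"workspace": "LAW-team1"}}
default_dcr = {"workspace": "LAW-default"}

print(route_log_entry("app-team-1", extension_dcrs, default_dcr)["workspace"])   # LAW-team1
print(route_log_entry("kube-system", extension_dcrs, default_dcr)["workspace"])  # LAW-default
```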
Limitations
- See Limitations for high scale logs collection in Container Insights.
- A maximum of 30 ContainerLogV2Extension DCR associations are supported per cluster.
Prerequisites
- High log scale mode must be configured for the cluster using the guidance at High scale logs collection in Container Insights (Preview).
- A data collection endpoint (DCE) is created with the DCR for each application or infrastructure team. The Logs Ingestion endpoint of each DCE must be configured in the firewall as described in Network firewall requirements for high scale logs collection in Container Insights.
Enable multitenancy for the cluster
Follow the guidance in Configure and deploy ConfigMap to download and update ConfigMap for the cluster.
Enable high scale mode by changing the `enabled` setting under `agent_settings.high_log_scale` as follows.

```yaml
agent-settings: |-
  [agent_settings.high_log_scale]
    enabled = true
```
Enable multitenancy by changing the `enabled` setting under `log_collection_settings.multi_tenancy` as follows.

```yaml
log-data-collection-settings: |-
  [log_collection_settings]
    [log_collection_settings.multi_tenancy]
      enabled = true
```
Apply the ConfigMap to the cluster with the following commands.

```bash
kubectl config set-context <cluster-name>
kubectl apply -f <configmap_yaml_file.yaml>
```
Create DCR for each application or infrastructure team
Repeat the following steps to create a separate DCR for each application or infrastructure team. Each will include a set of K8s namespaces and a Log Analytics workspace destination.
Tip
For multihoming, deploy a separate DCR template and parameter file for each Log Analytics workspace, each including the same set of K8s namespaces. This enables the same logs to be sent to multiple workspaces. For example, to send logs for app-team-1 and app-team-2 to both LAW1 and LAW2:
- Create DCR1 and include LAW1 for app-team-1 and app-team-2 namespaces
- Create DCR2 and include LAW2 for app-team-1 and app-team-2 namespaces
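The fan-out produced by those two DCRs can be sketched as building (namespace, workspace) pairs. The names here are hypothetical and for illustration only:

```python
def multihome_routes(namespaces, workspaces):
    """Each DCR pairs the same namespace set with one workspace, so the
    combined DCRs deliver every namespace's logs to every workspace."""
    return {(ns, ws) for ws in workspaces for ns in namespaces}

routes = multihome_routes(["app-team-1", "app-team-2"], ["LAW1", "LAW2"])
# Four (namespace, workspace) pairs: each namespace's logs go to both workspaces.
print(sorted(routes))
```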
Retrieve the following ARM template and parameter file.

- Template: http://aka.ms/aks-enable-monitoring-multitenancy-onboarding-template-file
- Parameter: http://aka.ms/aks-enable-monitoring-multitenancy-onboarding-template-parameter-file

Edit the parameter file with values for the following parameters.
| Parameter Name | Description |
|---|---|
| `aksResourceId` | Azure Resource ID of the AKS cluster |
| `aksResourceLocation` | Azure region of the AKS cluster |
| `workspaceResourceId` | Azure Resource ID of the Log Analytics workspace |
| `workspaceRegion` | Azure region of the Log Analytics workspace |
| `K8sNamespaces` | List of K8s namespaces for logs to be sent to the Log Analytics workspace defined in this parameter file |
| `resourceTagValues` | Azure resource tags to use on the AKS cluster, data collection rule (DCR), and data collection endpoint (DCE) |
| `transformKql` | KQL filter for advanced filtering using an ingestion-time transformation. For example, to exclude the logs for a specific pod, use `source \| where PodName != '<podName>'`. See Transformations in Azure Monitor for details. |
| `useAzureMonitorPrivateLinkScope` | Indicates whether to configure Azure Monitor Private Link Scope |
| `azureMonitorPrivateLinkScopeResourceId` | Azure Resource ID of the Azure Monitor Private Link Scope |

Deploy the template using the parameter file with the following command.
```bash
az deployment group create --name AzureMonitorDeployment --resource-group <aksClusterResourceGroup> --template-file existingClusterOnboarding.json --parameters existingClusterParam.json
```
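A parameter file for one team might look like the following sketch, generated here with Python. All resource IDs, regions, and tag values are placeholders; check the downloaded parameter file for the exact schema before deploying:

```python
import json

# Placeholder values; substitute your own resource IDs, regions, and namespaces.
params = {
    "aksResourceId": {"value": "/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.ContainerService/managedClusters/<clusterName>"},
    "aksResourceLocation": {"value": "eastus"},
    "workspaceResourceId": {"value": "/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>"},
    "workspaceRegion": {"value": "eastus"},
    "K8sNamespaces": {"value": ["app-team-1", "app-team-2"]},
    "resourceTagValues": {"value": {"team": "app-team-1"}},
    "transformKql": {"value": ""},
    "useAzureMonitorPrivateLinkScope": {"value": False},
    "azureMonitorPrivateLinkScopeResourceId": {"value": ""},
}
parameter_file = {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": params,
}
print(json.dumps(parameter_file, indent=2))
```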
Disabling multitenant logging
Note
See Disable monitoring of your Kubernetes cluster if you want to completely disable Container insights for the cluster.
Use the following steps to disable multitenant logging on a cluster.
Use the following command to list all the DCR associations for the cluster.
```bash
az monitor data-collection rule association list-by-resource --resource /subscriptions/<subId>/resourcegroups/<rgName>/providers/Microsoft.ContainerService/managedClusters/<clusterName>
```
Use the following command to delete each DCR association for the ContainerLogV2 extension.
```bash
az monitor data-collection rule association delete --association-name <ContainerLogV2ExtensionDCRA> --resource /subscriptions/<subId>/resourcegroups/<rgName>/providers/Microsoft.ContainerService/managedClusters/<clusterName>
```
Delete the ContainerLogV2Extension DCR.
```bash
az monitor data-collection rule delete --name <ContainerLogV2Extension DCR> --resource-group <rgName>
```
Edit `container-azm-ms-agentconfig` and change the value for `enabled` under `[log_collection_settings.multi_tenancy]` from `true` to `false`.

```bash
kubectl edit cm container-azm-ms-agentconfig -n kube-system -o yaml
```
Troubleshooting
Perform the following steps to troubleshoot issues with multitenant logging in Container insights.
Verify that high scale logging is enabled for the cluster.
```bash
# Get the list of ama-logs pods. These pods should be in the Running state.
# If they are not Running, this needs to be investigated.
kubectl get po -n kube-system | grep ama-logs

# Get the logs of one of the ama-logs daemonset pods and check for a log message indicating high scale is enabled.
kubectl logs ama-logs-xxxxx -n kube-system -c ama-logs | grep high
# Output should be something like "Using config map value: enabled = true for high log scale config"
```
Verify that ContainerLogV2 schema is enabled for the cluster.
```bash
# Get the list of ama-logs pods. These pods should be in the Running state.
# If they are not Running, this needs to be investigated.
kubectl get po -n kube-system | grep ama-logs

# Exec into any one of the ama-logs daemonset pods and check the environment variables.
kubectl exec -it ama-logs-xxxxx -n kube-system -c ama-logs -- bash

# Check whether the ContainerLogV2 schema is enabled.
env | grep AZMON_CONTAINER_LOG_SCHEMA_VERSION
# Output should be AZMON_CONTAINER_LOG_SCHEMA_VERSION=v2.
# If not v2, check whether it's being enabled through the DCR.
grep -r "enableContainerLogV2" /etc/mdsd.d/config-cache/configchunks/
# Validate from the JSON output whether enableContainerLogV2 is configured with true.
```
Verify that multitenancy is enabled for the cluster.
```bash
# Get the list of ama-logs pods. These pods should be in the Running state.
# If they are not Running, this needs to be investigated.
kubectl get po -n kube-system | grep ama-logs

# Get the logs of one of the ama-logs daemonset pods and check for a log message indicating multitenancy is enabled.
kubectl logs ama-logs-xxxxx -n kube-system -c ama-logs | grep multi_tenancy
# Output should be something like "config::INFO: Using config map setting multi_tenancy enabled: true, advanced_mode_enabled: false and namespaces: [] for Multitenancy log collection"
```
Verify that the DCRs and DCEs related to ContainerInsightsExtension and ContainerLogV2Extension are created.
```bash
az account set -s <clustersubscriptionId>
az monitor data-collection rule association list-by-resource --resource "<clusterResourceId>"
# Output should list both the ContainerInsightsExtension and ContainerLogV2Extension DCRs associated with the cluster.

# From the output, for each dataCollectionRuleId, check whether a dataCollectionEndpoint is associated.
az monitor data-collection rule show --ids <dataCollectionRuleId>
# You can also check the extension settings for the K8s namespace configuration.
```
Verify that the agent is downloading all the associated DCRs.
```bash
# Get the list of ama-logs pods. These pods should be in the Running state.
# If they are not Running, this needs to be investigated.
kubectl get po -n kube-system | grep ama-logs

# Exec into any one of the ama-logs daemonset pods.
kubectl exec -it ama-logs-xxxxx -n kube-system -c ama-logs -- bash

# Check whether the ContainerLogV2Extension DCRs were downloaded.
grep -r "ContainerLogV2Extension" /etc/mdsd.d/config-cache/configchunks
# Output should list all the associated DCRs and their configuration.
# If no DCRs are downloaded, the agent likely has issues pulling the associated DCRs,
# which could be a network or firewall issue. Check for errors in the mdsd.err log file.
cat /var/opt/microsoft/linuxmonagent/log/mdsd.err
```
Check whether there are any errors in the fluent-bit-out-oms-runtime.log file.
```bash
# Get the list of ama-logs pods. These pods should be in the Running state.
# If they are not Running, this needs to be investigated.
kubectl get po -n kube-system | grep ama-logs

# Exec into any one of the ama-logs daemonset pods.
kubectl exec -it ama-logs-xxxxx -n kube-system -c ama-logs -- bash

# Check for errors.
cat /var/opt/microsoft/docker-cimprov/log/fluent-bit-out-oms-runtime.log
```
Next steps
- Read more about Container insights.