

We've renamed Microsoft Cloud App Security. It's now called Microsoft Defender for Cloud Apps. In the coming weeks, we'll update the screenshots and instructions here and in related pages. For more information about the change, see this announcement. To learn more about the recent renaming of Microsoft security services, see the Microsoft Ignite Security blog.

This article provides information about the following advanced configuration options for Defender for Cloud Apps Cloud Discovery log collectors:

Modify the log collector FTP configuration

Use these steps to modify the configuration for your Defender for Cloud Apps Cloud Discovery Docker.

Docker deployment

You might need to modify the configuration for the Defender for Cloud Apps Cloud Discovery Docker.

Changing the FTP password

  1. Connect to the log collector host.

  2. Run docker exec -it <collector name> pure-pw passwd <ftp user>

    1. Enter the new password.
    2. Enter the new password again for confirmation.
  3. Run docker exec -it <collector name> pure-pw mkdb to apply the change.
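Put together, the password change might look like the following shell session; the container name (LogCollector1) and FTP user (discovery) are example values, not ones from the original procedure:

```shell
# Change the FTP password for the collector's pure-ftpd user
# (container and user names below are examples).
docker exec -it LogCollector1 pure-pw passwd discovery
# Enter the new password twice when prompted.

# Apply the change by rebuilding pure-ftpd's user database.
docker exec -it LogCollector1 pure-pw mkdb
```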

Customize certificate files

Follow this procedure to customize the certificate files you use for secure connections to the Cloud Discovery Docker.

  1. Open an FTP client and connect to the log collector.

  2. Navigate to the ssl_update directory.

  3. Upload new certificate files to the ssl_update directory (the file names are mandatory).

    • For FTP: Only one file is required. The file has the key and certificate data, in that order, and is named pure-ftpd.pem.
    • For Syslog: Three files are required: ca.pem, server-key.pem, and server-cert.pem. If any of the files are missing, the update won't take place.
  4. In a terminal window run: docker exec -t <collector name> update_certs. The command output should confirm that the certificates were updated.

  5. In a terminal window run: docker exec <collector name> chmod -R 700 /etc/ssl/private/.

Enable the log collector behind a proxy

If you're running behind a proxy after you've configured the log collector, the log collector might have trouble sending data to Defender for Cloud Apps. This can happen because the log collector doesn't trust the proxy's root certificate authority and therefore can't connect to Microsoft Defender for Cloud Apps to retrieve its configuration or upload the received logs.

Use these steps to enable your log collector behind a proxy.


For information on how to change the certificates used by the log collector for Syslog or FTP, and to resolve connectivity issues from the firewalls and proxies to the log collector, see Modify the log collector FTP configuration.

Set up the log collector behind a proxy

Make sure you have performed the necessary steps to run Docker on a Windows or Linux machine and have successfully downloaded the Defender for Cloud Apps Docker image to the machine. For more information, see Configure automatic log upload for continuous reports.

Validate Docker log collector container creation

In the shell, verify that the container was created and is running using the following command:
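A typical way to check this with the Docker CLI is docker ps; the container name below is an example:

```shell
# List running containers; the log collector should show a status of "Up".
docker ps

# Optionally filter by the collector's name (example name).
docker ps --filter "name=Ubuntu-LogCollector"
```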

Copy proxy root CA certificate to the container

From your virtual machine, copy the CA certificate to the Defender for Cloud Apps container. In the following example, the container is named Ubuntu-LogCollector and the CA certificate is named Proxy-CA.crt. Run the command on the Ubuntu host; it copies the certificate to a folder in the running container:
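As a hedged sketch, the copy can be done with docker cp; the destination folder inside the container is an assumption:

```shell
# Copy the proxy root CA certificate from the Ubuntu host into the running
# container. The destination path inside the container is an assumption.
docker cp Proxy-CA.crt Ubuntu-LogCollector:/var/adallom/ftp/discovery/Proxy-CA.crt
```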

Set the configuration to work with the CA certificate

  1. Go into the container, using the following command. It will open bash in the log collector container:

  2. From a bash window inside the container, go to the Java jre folder. To avoid a version-related path error, use this command:

  3. Import the root certificate that you copied earlier, from the discovery folder into the Java KeyStore and define a password. The default password is 'changeit'. For information about changing the password, see How to change the Java KeyStore password.

  4. Validate that the certificate was imported correctly into the CA keystore, by using the following command to search for the alias you provided during the import (SelfSignedCert):

You should see your imported proxy CA certificate.
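A sketch of steps 1-4 above, assuming the certificate was copied to /var/adallom/ftp/discovery/ inside the container and that the JRE's cacerts keystore still uses the default 'changeit' password (both assumptions):

```shell
# 1. Open bash inside the log collector container (example container name).
docker exec -it Ubuntu-LogCollector /bin/bash

# 2. Inside the container, go to the Java jre security folder without
#    hard-coding the JRE version in the path (base directory is an assumption).
cd "$(find /opt -type d -path '*jre*/security' 2>/dev/null | head -1)"

# 3. Import the proxy root CA into the Java KeyStore under the alias
#    SelfSignedCert. The default keystore password is 'changeit'.
keytool -importcert -noprompt -trustcacerts \
  -alias SelfSignedCert \
  -file /var/adallom/ftp/discovery/Proxy-CA.crt \
  -keystore cacerts -storepass changeit

# 4. Validate the import by searching for the alias you provided.
keytool -list -keystore cacerts -storepass changeit | grep -i selfsignedcert
```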

Set the log collector to run with the new configuration

The container is now ready.

Run the collector_config command using the API token that you used during the creation of your log collector:

When you run the command, specify your own API token:
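The command takes roughly this shape; the token, console URL, and collector name are placeholders to replace with your own values:

```shell
# Re-run the collector configuration with your own API token.
# All three values below are placeholders.
collector_config <your-api-token> <your-console-url> <collector-name>
```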

The log collector is now able to communicate with Defender for Cloud Apps. After the collector sends data, its status changes from Healthy to Connected in the Defender for Cloud Apps portal.



If you have to update the configuration of the log collector, to add or remove a data source for example, you normally have to delete the container and perform the previous steps again. To avoid this, you can re-run the collector_config tool with the new API token generated in the Defender for Cloud Apps portal.

How to change the Java KeyStore password

  1. Stop the Java KeyStore server.

  2. Open a bash shell inside the container and go to the appdata/conf folder.

  3. Change the server KeyStore password by using this command:

  4. Change the certificate password by using this command:


    The default server alias is server.

  5. In a text editor, open the file, add the following lines, and then save the changes:

    1. Specify the new Java KeyStore password for the server: server.keystore.password=newStorePassword
    2. Specify the new Certificate password for the server: server.key.password=newKeyPassword
  6. Start the server.
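With Java's keytool, the two password changes generally look like this; the keystore file name and the current passwords are assumptions for illustration:

```shell
# Change the KeyStore password (keystore file name is an assumption).
keytool -storepasswd -keystore server.keystore \
  -storepass oldStorePassword -new newStorePassword

# Change the password of the server certificate's key entry.
# The default server alias is "server".
keytool -keypasswd -keystore server.keystore -alias server \
  -storepass newStorePassword -keypass oldKeyPassword -new newKeyPassword
```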

Move the log collector to a different data partition on Linux

Many companies have the requirement to move data to a separate partition. Use these steps to move your Defender for Cloud Apps Docker log collector images to a data partition on your Linux host.

The following steps describe moving data to a partition called datastore and assume you have already mounted the partition.


Adding and configuring a new partition on your Linux host is not in the scope of this guide.

  1. Stop the Docker service by using this command:

  2. Move the log collector data to the new partition by using this command:

  3. Remove the old Docker storage directory (/var/lib/docker) and create a symbolic link to the new directory (/datastore/docker).

  4. Start the Docker service by using this command:

  5. Optionally verify the status of your log collector by using this command:
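Assuming the new partition is mounted at /datastore, the sequence above could look like the following sketch (service management commands may vary by distribution):

```shell
# 1. Stop the Docker service.
systemctl stop docker

# 2. Move the Docker data directory (log collector included) to the
#    new partition.
mv /var/lib/docker /datastore/docker

# 3. Create a symbolic link from the old location to the new directory.
ln -s /datastore/docker /var/lib/docker

# 4. Start the Docker service again.
systemctl start docker

# 5. Optionally verify that the log collector container is running.
docker ps
```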

Inspect the log collector disk usage on Linux

Use these steps to review your log collector disk usage and location.

  1. Identify the path to the directory where the log collector data is stored by using this command:

  2. Get the size on disk of the log collector using the identified path without the '/work' suffix:


    If you only need to know the size on disk, you can use this command: docker ps -s
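A hedged sketch of the two steps, assuming the overlay2 storage driver and an example container name:

```shell
# 1. Identify where the log collector data is stored. With the overlay2
#    driver this prints a path ending in '/work'.
docker inspect --format '{{ .GraphDriver.Data.WorkDir }}' Ubuntu-LogCollector

# 2. Get the size on disk of that path, dropping the '/work' suffix
#    (the layer ID below is a placeholder).
du -sh /var/lib/docker/overlay2/<layer-id>/

# If you only need the size on disk:
docker ps -s
```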

Move the log collector to an accessible host

In regulated environments, access to the Docker Hub where the log collector image is hosted may be blocked. This prevents Defender for Cloud Apps from importing the data from the log collector and can be resolved by moving the log collector image to an accessible host.

Use these steps to download the log collector image using a computer that has access to Docker Hub and import it to your destination host.


  • The downloaded image can be imported either into your private repository or directly on your host. The following steps guide you through downloading your log collector image to your Windows computer and then using WinSCP to move the log collector to your destination host.
  • To install Docker on your host, download the Docker installer for the desired operating system:

After the download, use the offline installation guide to install Docker.

Start the process by exporting the log collector image and then import the image to your destination host.

Export the log collector image from your Docker Hub

Use the steps relevant to the operating system of the Docker Hub where the log collector image is located.

Exporting the image on Linux

  1. On a Linux computer that has access to the Docker Hub, run the following command. This will install Docker and download the log collector image.

  2. Export the log collector image.


    It's important to use the output parameter to write to a file, instead of STDOUT.

  3. Download the log collector image to your Windows computer under C:\mcasLogCollector using WinSCP.
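The export step can be done with docker save; the image name and output path below are assumptions:

```shell
# Export the log collector image to a tarball. Using --output matters:
# writing the binary stream to STDOUT and redirecting it can corrupt it.
docker save --output /tmp/mcasLogCollector.tar <log-collector-image>

# Make the file readable for the WinSCP transfer.
chmod +r /tmp/mcasLogCollector.tar
```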

Exporting the image on Windows

  1. On a Windows 10 computer that has access to the Docker Hub, install Docker Desktop.

  2. Download the log collector image.

  3. Export the log collector image.


    It's important to use the output parameter to write to a file, instead of STDOUT.

Import and load the log collector image to your destination host

Use these steps to transfer the exported image to your destination host.

  1. Upload the log collector image to your destination host under /tmp/.

  2. On the destination host, import the log collector image to the Docker images repository by using this command:

  3. Optionally, verify that the import completed successfully by using this command:

    You can now proceed to create your log collector using the image from the destination host.
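Loading and verifying the image on the destination host could look like this; the tarball name is an assumption:

```shell
# Import the uploaded image into the local Docker image store.
docker load --input /tmp/mcasLogCollector.tar

# Optionally verify that the image now appears in the repository list.
docker image ls
```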

Define custom ports for Syslog and FTP receivers for log collectors on Linux


Some organizations have a requirement to define custom ports for Syslog and FTP services. When you add a data source, Defender for Cloud Apps log collectors use specific port numbers to listen for traffic logs from one or more data sources.

The following table lists the default listening ports for receivers:

Receiver type    Ports
Syslog           UDP/514 - UDP/51x
                 TCP/601 - TCP/60x

Use these steps to define custom ports.

  1. In Defender for Cloud Apps, click the settings icon followed by Log collectors.

  2. On the Log collectors tab, add or edit a log collector and after updating the data sources, copy the run command from the dialog.


    If used as provided, the wizard-generated command configures the log collector to use ports 514/udp and 515/udp.

  3. Before using the command on your host machine, modify the command to use your custom ports. For example, to configure the log collector to use UDP ports 414 and 415, change the command as follows:


    Only the Docker port mapping is modified. The internally assigned ports are not changed, which lets you choose any listening port on the host.
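As an illustration only, the change amounts to editing the -p mappings in the wizard-provided run command; every other argument (API token, image name, and so on) is represented by placeholders here:

```shell
# Map host UDP ports 414 and 415 to the container's internal listeners
# 514/udp and 515/udp. The name and image below are placeholders for the
# values the wizard supplies.
docker run -d \
  -p 414:514/udp \
  -p 415:515/udp \
  --name <collector-name> \
  <log-collector-image>
```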

Validate the traffic and log format received by log collector on Linux

Occasionally, you may need to investigate issues such as the following:

  • Log collectors are receiving data: Validate that log collectors are receiving Syslog messages from your appliances and are not blocked by firewalls.
  • Received data is in the correct log format: Validate the log format to help you troubleshoot parsing errors by comparing the log format expected by Defender for Cloud Apps and the one sent by your appliance.

Use these steps to validate the traffic received by log collectors.

  1. Sign in to your server hosting the Docker container.

  2. Validate that the log collector is receiving Syslog messages using any of the following methods:

    • By using tcpdump, or a similar command, to analyze network traffic on port 514:

      If everything is correctly configured, you should see network traffic from your appliances.

    • By using netcat, or a similar command, to analyze network traffic on the host machine:

      1. Install netcat and wget.

      2. Download, and if required unzip, a sample log, as follows:

        1. In the Defender for Cloud Apps portal, click Discover, and then click Create snapshot report.
        2. Select the Data source from which you want to upload the log files.
        3. Click View and verify, then right-click Download sample log and copy the URL address link.
        4. Click Close.
        5. Click Cancel.
      3. Run netcat to stream the data to the log collector.

      If the collector is correctly configured, the log data will be present in the messages file and shortly after that it will be uploaded to the Defender for Cloud Apps portal.

    • By inspecting relevant files within the Defender for Cloud Apps Docker container:

      1. Log in to the container by using this command:
      2. Determine if Syslog messages are being written to the messages file by using this command:

      If everything is correctly configured, you should see network traffic from your appliances.


      This file will continue to be written to until it reaches 40 KB in size.

  3. Review logs that have been uploaded to Defender for Cloud Apps in the /var/adallom/discoverylogsbackup directory.

  4. Validate the log format received by the log collector by comparing the messages stored in /var/adallom/discoverylogsbackup to the sample log format provided in the Defender for Cloud Apps Create log collector wizard.


If you want to use your own sample log but don't have access to the appliance, use the following commands to write the output of the messages file (located in the log collector's syslog directory) to a local file on the host.

Compare the output file (/tmp/log.log) to the messages stored in /var/adallom/discoverylogsbackup.
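A hedged sketch of the validation commands above; the container name, the per-port syslog directory, and the sample file name are assumptions:

```shell
# Watch for incoming Syslog traffic on port 514 (run on the Docker host).
tcpdump -Als0 port 514

# Stream a downloaded sample log to the collector with netcat.
nc -w 0 127.0.0.1 514 < sample.log

# Inside the container, check that messages are being written
# (the per-port syslog directory is an assumption).
docker exec -it <collector-name> tail -f /var/adallom/syslog/514/messages

# Write the messages file to a local file on the host for comparison
# against /var/adallom/discoverylogsbackup.
docker exec -it <collector-name> cat /var/adallom/syslog/514/messages > /tmp/log.log
```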

Next steps

If you run into any problems, we're here to help. To get assistance or support for your product issue, please open a support ticket.

Be sure to read Part 1 and Part 3 of our DNS Log Collection series, in case you missed them.

DNS Log Collection on Windows

If you need to reduce the cost of DNS security and increase efficiency through centralizing DNS log collection, where would you start? Answering this question requires knowledge and awareness of the challenges and opportunities available on the Windows platform. While Windows DNS Server is a common technology serving many types of organizations, from local domains to large multi-site enterprises, the possibilities are not necessarily that well-known within the context of comprehensive, site-wide log collection. This article distills the main concepts essential to planning and deploying such an implementation, and serves as the second part of the DNS log collection series. To start, it will touch on log sources that are generated by Windows DNS servers as well as the DNS requests of the clients they serve.

Windows DNS Log Sources


You may know that there are numerous ways of collecting DNS logs within the Windows environment:

  • Collecting DNS query logs via Sysmon

  • Collecting traces directly with Event Tracing for Windows (ETW) DNS Providers

  • Collecting from the relevant Windows Event Log channels

  • File-based DNS debug logging

The deployment and resources to be used for DNS log collection will also depend on whether the logs will be collected from the DNS server (a critical asset) or from DNS clients. Each of these will be covered in further detail in this blog post.

Collecting DNS Query Logs from Sysmon

As of Sysmon version 10.0, there is a DNS query logging feature to collect DNS query logs from clients. These events are generated when a process executes a DNS query, whether the query succeeds or fails, and whether the result is cached or not.

Depending on how Sysmon is configured, you can also set additional rules in the Sysmon configuration file in relation to Event ID 22: DNSEvent (DNS query). This is advisable due to the noisy nature of this type of event. These additions can include:

  • Exclusion rules to avoid logging reverse DNS lookups

  • Exclusion rules specifying which domains to exclude. If excluding certain top-level domains (to reduce the amount of logs collected), be more specific with the domains

  • Rules to exclude IPv6 lookups

  • Rules to omit domains typically used in sandboxes like localhost

  • Rules to omit queries involving popular third-party applications like Google and Mozilla, as well as CDNs

  • Rules to omit sites that involve social media widgets like Disqus

  • Rules to exclude ad serving sites and other ad-related services

These are only suggestions for rules and are by no means exhaustive. There are Sysmon configuration samples available online for use and adaptation.

Since DNS queries generate a large amount of logs, you may opt to forward Sysmon DNS events in their own output stream to a central log server instead of merging them with other DNS client event sources.

Collecting from DNS ETW Providers


The DNS ETW providers with their corresponding GUIDs are displayed in the table below.

Table 1. List of ETW Providers

ETW Provider Name            GUID
DNS Server Trace Provider
Most of the time, ETW is not considered as a log source, either because it is not widely known, or because special tools are needed to keep track of log traces (see Solving Windows Log Collection Challenges with Event Tracing). In addition, these tools can negatively affect DNS server performance, especially if they are set to continuously collect and write event traces to disk or convert to a format like JSON before being forwarded to a remote host.

Enhanced Windows DNS Event Log Logging

Enhanced DNS Server audit events are available via both the Windows Event Log channels, such as the Microsoft-Windows-DNSServer/Audit channel, as well as directly from the Windows Event Tracing (ETW) provider. These enable change tracking on Windows DNS Server, provided audit events are set to be logged in the Group Policy Editor. If enabled, an audit event is logged for each instance when changes are made to the DNS server, such as:

Windows DNS Audit Events

  • Zone operations – zone deletions, updates, zone record creation and deletion, zone scope creation and deletion, online signing (zone signing/unsigning/re-signing), pausing/reloading/resuming zones

  • DNSSEC operations – key rollover events, export/import of DNSSEC metadata, addition of trust points

  • Cache operations (cache purge events)

  • Policy operation events – creation/deletion/updating of records such as client subnet records, server level policies or zone level policies

  • Other server operations – restarting the server, clearing of debug logs, clearing of statistics, scavenging operations

These audit events represent important operations for any DNS server. They canprovide very useful information for security and compliance reasons, as wellas for incident response.

Ensure that auditing is enabled on Windows DNS Server via the Group Policy Management Editor. You can also configure auditing on the target object via the ADSIEDIT.MSC console by making the necessary changes to the auditing properties of that object.

The following is an event sample from Microsoft Windows DNS Server for audit event 513 (Type: Zone delete, Category: Zone operations) generated by the Microsoft-Windows-DNSServer channel.

Windows DNS Analytical Events

DNS analytical events differ from DNS audit events in that they are generated each time Windows DNS Server processes a request. They need to be enabled on the DNS server before logging can happen. Analytical events include:

  • Lookup events – response success/failure, CNAME lookups, internal lookups

  • Recursive query events

  • Dynamic update events

  • Zone XFR events

The following sample shows Event ID 280 (Type: Internal lookup additional, Category: Lookup) that is generated by the ETW provider Microsoft-Windows-DNSServer.

Active Directory and Native DNS Auditing

DNS is automatically installed with Active Directory as the Global Catalog serverfor the forest and domain. There are a number of features available in Windows DNSServer, such as Native DNS Auditing.

However, systems prior to 2012 R2, or 2012 R2 without hotfix 2956577, do not have native DNS auditing capabilities included. On those systems, DNS changes can instead be audited by enabling AD Directory Services auditing. For more information, see the AD DS Auditing Step-by-Step Guide on Microsoft Docs.

Collecting File-based Microsoft DNS Debug Log Files

The DNS debug file is important since it contains detailed information on DNS queries and activity that is sent and received by the DNS server.

The following debug log sample displays a simple DNS query test from Windows DNS Server:

Due to the amount of logs being generated from DNS debug logging, it is recommended to rotate logs and have them collected on a central server. Parsing the logs is also suggested, in order to select which logs to enrich. Although DNS debug logging has some advantages, it does come with some additional caveats worth considering:

  • Due to the way Microsoft handles log rollover of DNS debug logs, if the log file is located on any drive other than the C: drive, the Windows DNS service may not recreate the debug log file after a rollover. See The disappearing Windows DNS debug log for an in-depth analysis of this issue.

  • The log information gleaned from DNS debug logging is inherently unstructured. Parsing is required to create usable event logs. If the Details option has been selected, regular expressions are needed to parse the event fields. Such configurations are complex and can be associated with additional performance overhead. For busy DNS servers, this would not be a recommended option. For more information see File-based DNS Debug Logging.

Performance Considerations

Depending on which of these logging methods you use, there are a few variables that can affect performance:

  • The DNS server’s hardware specifications.

  • The QPS (queries per second) rate.

  • The place where log enrichment or parsing is done. It can be done either locally or on a central logging server after the logs are received.

  • The type of logging taking place. It is recommended to enable DNS debug logging only temporarily as needed.

All these factors play a role in influencing log performance.

What can NXLog do?

NXLog simplifies DNS log collection by providing a single software solution that incorporates the various technologies required to efficiently collect DNS-related logs. NXLog offers the following methods for the DNS logging technologies discussed above.

Sysmon Log Collection

Use the im_msvistalog module and add the relevant Query in the configuration file. Find out more at Collecting DNS logs via Sysmon in the NXLog User Guide.

ETW (Event Tracing for Windows) Collection

There is a module, im_etw, that is specifically designed to collect logs from ETW providers without much performance overhead. It acts both as a Controller and a Consumer (see Using NXLog as a Single Agent Solution to Collect ETW Logs).

Native Windows Event Log Collection


For DNS events that can be collected from the Windows Event Log, including Sysmon, use the im_msvistalog module and specify a query for the name of the channel and channel type. You can also add additional filtering to the query. See Windows Event Log.

File-based Log Collection from the Windows DNS Debug File

There is a section in our User Guide detailing the steps involved in the setup of DNS debug logging, including Parsing Non-Detailed Logs With xm_msdns.



With this article, you have learned about the opportunities and challenges of these modes of Windows DNS log collection: Sysmon, Event Tracing for Windows (ETW), Windows Event Log, and Windows DNS debug file logging. You have also learned about possible DNS performance considerations and the solutions available for DNS log collection. With this knowledge of the various solutions available, you can avoid the pitfalls of deploying less efficient solutions, or ending up with a deployment that is logging either too many or too few DNS events.

DNS, for many reasons, is an important asset that must not be overlooked. It is known that attackers are abusing DNS, and it is through efficient and reliable DNS log collection that you can reap the benefits of this essential component of security monitoring. Our white paper, The Importance of DNS Logging in Enterprise Security, expands on this theme.


Download a fully functional trial of the Enterprise Edition for free